Saturday, 14 March 2015

The Mysterious Dwarf Planet Ceres

“The successful launch of Mission Mars has enhanced the interest of common people in space mysteries. There are many unsolved, unknown stories about asteroids in space. I have gone through many such true findings using NASA and other sources. The present story covers the presence of a dwarf planet residing in the asteroid belt.”
We had a craze for the Moon. We now have a craze for Mars. Are we missing something? The asteroid belt? Simply speaking, it consists of non-spherical, uninteresting blocks of rock floating around between Mars and Jupiter, not worth a visit or a mention. Fret not, because something large roams among those rocks, something spherical in shape and, as such, a dwarf planet.
As always, I have been following NASA's Dawn space probe[1], launched in 2007 to take a close look at Ceres, our relatively lesser-known dwarf-planet neighbour residing in the asteroid belt between Mars and Jupiter, just beyond our new affair, Mars. It took the probe 7.5 years to reach its destination, and it will spend the next 14 months mapping the diminutive world. The $473 million Dawn mission is the first to target two different celestial objects, the better to understand how the solar system evolved. It is powered by ion propulsion engines, which provide gentle yet constant acceleration, making the craft far more fuel-efficient than one using conventional chemical rockets. With its massive solar wings unfurled, it measures about 20 metres, the length of a tractor-trailer.
It is the only dwarf planet in the inner Solar System and the only object in the asteroid belt known to be unambiguously rounded by its own gravity! So much so that, on closer observation, it looks just like a miniature Luna, our Moon. And is it a coincidence that Ceres has a surface area roughly equal to that of our country, India? Ceres is the largest object in the asteroid belt. Its mass has been determined by analysing the influence it exerts on smaller asteroids; results differ slightly between researchers, and the average of the three most precise values as of 2008 is 9.4×10^20 kg. With this mass, Ceres comprises about a third of the estimated total mass of the asteroid belt, (3.0 ± 0.2)×10^21 kg.
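The "about a third" claim follows directly from the two figures quoted; a quick sanity check in Python:

```python
# Sanity check of the figures quoted above (values taken from the text;
# the belt mass carries a stated uncertainty of ±0.2e21 kg).
CERES_MASS = 9.4e20   # kg, average of the three most precise 2008 values
BELT_MASS = 3.0e21    # kg, estimated total mass of the asteroid belt

fraction = CERES_MASS / BELT_MASS
print(f"Ceres is about {fraction:.0%} of the asteroid belt's mass")  # → about 31%
```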

(Image: Ceres, as seen by Dawn on 25 February 2015)
The Dawn space probe will study Ceres for more than a year. At the end of the mission it will stay in its lowest orbit indefinitely, where it could remain for hundreds of years.
The first thing that comes to a layman's mind is the question of habitability. Although not as actively discussed as a potential home for microbial extraterrestrial life as Mars, Titan, Europa or Enceladus, the presence of water ice has led to speculation that life may exist on Ceres, and even that material ejected from it could have reached Earth.
The recent look at Ceres has answered many questions, but raised a lot more. Researchers think Ceres' interior is dominated by a rocky core topped by ice, which is in turn insulated by rocky lag deposits at the surface. A big question the mission hopes to answer is whether there is a liquid ocean of water at depth; some models suggest there could well be. The evidence will probably be found in Ceres' craters, which have a muted look to them. That is, the soft interior of Ceres has likely had the effect of relaxing the craters' original hard outlines.

One big talking point has dominated the approach to the object: the origin and nature of two very bright spots seen inside a 92 km-wide crater in the northern hemisphere. I speculate that those bright spots are some kind of cryovolcanoes, ice-spewing volcanoes, which sound cool enough to me (pun intended).
I believe that Ceres will eventually tell us something about the origins of our solar system. Early in the solar system's history, Ceres was on its way to becoming a planetary embryo and would have merged with other objects to form a terrestrial planet, but its evolution was cut short somehow and its form was kept intact.
We will hear more about Ceres soon, as the Chinese are planning their own mission in the next decade. I hope India will follow suit…




References
1. http://www.bbc.com/news/science-environment-31754586
2. NASA Jet Propulsion Laboratory raw data




[1] The NASA Dawn probe is a spacecraft for analysing the asteroid belt.
[1] The author is pursuing engineering in Electronics and Communication at Manipal University, Jaipur. He is a keen observer of space research and technology. He may be contacted at sankrant.chaubey@gmail.com


Article by:
Sankrant Chaubey
Runner-up for Techscribe

The next “play” of technology

Technology. What is it?
I like one of the definitions that Alan Kay has for technology. He says, “Technology is anything that was invented after you were born.” Yeah, give it a thought. Sounds just about right. It also sums up a lot of what we're talking about these days. Danny Hillis actually has an update on that; he says, “Technology is anything that doesn't quite work yet.”
The aspect of technology that I am concerned with here is its capability of interfacing the digital and the physical world. The digital world is a product of computer science, and it is the job of technology to allow humans to interact with it. For this we started with keyboards and mice connected to computers. Now we have touch screens and other novel techniques like speech/gesture control and augmented reality. I'd also like to clarify here that augmented reality is not virtual reality. It is more than just a very lifelike rendition of video recorded on cameras; it also involves manipulation and extraction of important information by either human or artificial intelligence.
Now what is the ultimate dream for technology? I think it is that it should evolve in ways that it becomes more independent of human input and becomes intelligent enough to do our mundane chores for us. We are already heading towards it in the form of automated robots in manufacturing factories, cruise control in cars, computer games like FIFA (which involve artificial decision making to some extent) etc.

Hold that thought and consider the current scenario of tech use in sports. While very new technologies are being used for broadcasting, for the actual gameplay tech is at a very incipient stage. Only now has FIFA decided to allow managers to receive real-time information, like players' fatigue, oxygen and lactic acid levels from sensors attached to their bodies, to decide on tactical substitutions. But again, this isn't directly involved with how the game is being played.
Now, what would it be like to combine the two fields of artificial intelligence and human-computer interaction and use them to improve the gameplay itself? Well, that would be one hell of a pie.
What if, during a free kick, the defending team's players were informed of the possible set-piece routine they may be facing, so they could adjust accordingly to thwart any surprise moves by the attacking team? This could be done by a computer observing the positions of the attackers through cameras and checking its database of past set-piece routines used anywhere in the world, intelligently deciding how the attacking team might be varying one, and then telling each defender through an earpiece how best to position himself.
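As a purely hypothetical sketch of that lookup step, the core of such a system could be a nearest-template match of observed attacker positions against a library of known routines. The routine names, coordinates and the simple order-based matching below are all invented for illustration; a real system would solve a proper assignment problem and use far richer features.

```python
import math

# Toy library of set-piece routines: each maps a (made-up) routine name to
# attacker positions in pitch coordinates. Everything here is illustrative.
ROUTINES = {
    "near-post overload": [(10, 2), (11, 3), (14, 5)],
    "far-post cluster":   [(18, 2), (19, 4), (20, 3)],
}

def total_distance(observed, template):
    # Sum of pairwise distances, pairing players in listed order (a real
    # system would solve an optimal assignment instead).
    return sum(math.dist(o, t) for o, t in zip(observed, template))

def likely_routine(observed):
    # Return the routine whose template is closest to what the cameras see.
    return min(ROUTINES, key=lambda name: total_distance(observed, ROUTINES[name]))

print(likely_routine([(10, 2), (12, 3), (14, 4)]))  # → near-post overload
```

The earpiece side would then simply broadcast positioning advice derived from the matched routine.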
There are many possibilities for its use. An attacker could be informed about defenders catching up on him from behind, when he normally would have been blindsided. He could be told the best direction in which to take a free kick or a shot, according to the goalkeeper's position.
Taking it to the next level, the player starting a move could be informed which player to pass to, and then that player would be told whom to pass to, and so on. A very fluid and unstoppable move could be managed from start to end by an artificially intelligent computer. Presenting all this information in the best possible way through augmented reality would be the job of the team's IT department. For the bad teams this would be information overload, while for the good teams it would be a boon.
The detractors would say this takes away the human element from the game. But similar things are already being used in sports. In fact, in 1994 NFL teams installed radios in their quarterbacks' helmets to pass on instructions. The idea was similar: to aid the player in making better decisions, only that the helper then was the assistant coach, and now it would be a computer, which would obviously be faster and more accurate. Moreover, it is human tendency to do less labour, mental or physical; the system just reduces that extra mental load a bit. It would still require players with skill to act on the proposed plan.

This would make the game faster, more accurate and more enjoyable, and as a result attract a bigger audience. A win-win situation for all involved.
As a final thought, could technology evolve and get refined to a level high enough to do such things?

All I can say is, in Alan Kay's words: ‘The best way to predict the future is to invent it.’


Article by:
Shreyance Tewari
ECE 3rd year, MUJ
Winner of Techscribe

Thursday, 12 March 2015

Smart healthcare

A long time ago, a revolution took place in the field of healthcare. Human anatomy was recognized as a science, and the doctors who operated on people came to be held in high esteem. Healthy living habits were developed and spread among the masses, but unfortunately were never fully realized.
Living in India, we know that even though our country is one of the leading nations in healthcare research, much of its rural population rarely has basic healthcare available to it. Part of the problem lies in the fact that the healthcare budget of our country is very low in comparison to other nations. That is why many of our citizens still die of diseases that could be easily cured. Many people die without ever realizing there was something wrong with them, or they don't find out until it's too late. Some can't keep a proper tab on their condition because they can't get regular checkups, and they ultimately suffer for it. If only there were a way we could stop this, a way that could be implemented within our country's rather small healthcare budget and limited number of doctors, a way accessible to all, a cost-effective way. A smart way.
Enter the world of smart healthcare: a way to automate healthcare and extend the better standard of living that modern technology brings to even the remotest of areas. Let's consider the scenario of a village that has basic mobile connectivity, a rudimentary healthcare system and a handful of literate people. By itself it can't diagnose serious diseases, especially the ones disguised as something very normal, like swine flu, most of whose symptoms are consistent with the common cold. But that system, when paired with an expert on video chat using mobile connectivity via a tablet, can help save many lives in the village and contain the disease. Called telehealth, it is one of the most effective systems for delivering cheap and reliable healthcare, not just in India but in countless countries, by simply eliminating the need for physical presence.
But that's not all, for smart healthcare comes with a variety of devices that make us healthier every day. The pedometer is one of the most common examples. By telling us how much we actually ran or walked in a day, it not only helps us track our fitness but also motivates us to be healthier. And now, with the introduction of smartwatches and smart bands like Fitbit, it's easier than ever, as the device simply lives in these everyday gadgets. These gadgets even measure heart rate through the day, which is beneficial for people with heart disease or hypertension: they detect and trigger a warning if something starts going wrong, potentially saving hundreds of lives.
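At its simplest, a pedometer counts steps by detecting peaks in the accelerometer's magnitude signal. The toy sketch below illustrates the idea with threshold crossing; the threshold value and the readings are made up, and real devices add filtering and adaptive thresholds on top of this.

```python
# Toy illustration of step counting: each rising edge of the accelerometer
# magnitude above a threshold counts as one step. Threshold and data are
# invented for illustration; real pedometers filter and adapt.
def count_steps(magnitudes, threshold=11.0):
    steps = 0
    above = False
    for m in magnitudes:
        if m > threshold and not above:   # rising edge = one step
            steps += 1
            above = True
        elif m <= threshold:              # fell back below: re-arm
            above = False
    return steps

readings = [9.8, 10.1, 12.3, 9.9, 9.7, 12.8, 10.0, 9.8, 12.1, 9.9]
print(count_steps(readings))  # → 3
```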
The same goes for specially made bands for people suffering from epilepsy, cancer or some terminal disease, which trigger an alarm and send the patient's location data to family members in case of an emergency. One of the most revolutionary changes has come to diabetes treatment with the introduction of test strips, which detect blood glucose levels instantly and can be used by anyone, enabling patients to keep their vitals steady and in check.
There are similar methods by which life-threatening diseases such as AIDS can be diagnosed quickly and cheaply. Smart devices such as these are going to make healthcare everywhere better and cheaper, and also expand the system's reach to the remotest areas. What makes the approach so powerful is that it can do all this by simply building on the current system.


Wednesday, 4 March 2015

Build Green

“Anybody can write a book and most of them are doing it; but it takes brains to build a house.”
Charles F. Lummis, United States journalist and Indian rights and historic preservation activist.

But then again, our ancestors, blinded by the glare of industrialization and modernization, flouted what was said, and so boomed an age in which lush greenery was replaced by concrete jungles that our forefathers saw as a mark of pride and dignity. However, this was short-lived. Improper use of building materials, absence of proper construction methodologies, failure in execution and maintenance, and lack of research over the past few decades left an ecological footprint of loss and wastage of natural resources and destroyed the splendor bestowed upon this planet and its beings. The environmental impact of concrete, its production and its uses, has been multifaceted. Cement production, the key step in making concrete, is one of the primary producers of carbon dioxide, a major greenhouse gas.
We are not primitive anymore. We have become aware of the environment, we are wary of the mistakes that our ancestors have made. We vivaciously dream about new technologies and make efforts to mold them into reality.
Nowadays, “GOING GREEN” has become a top priority in our society, and sustainable buildings and design are at the forefront of this green revolution.

What is a green building?
The answer to this is quite simple. A green building is one whose construction and lifetime operation assure the healthiest possible environment while representing the most efficient and least disruptive use of land, water, energy and resources. The optimum design solution is one that effectively imitates all of the natural systems and conditions of the pre-developed site after development is complete. A green building is a smart building: it senses and reacts accordingly, and caters to the needs of the user and the local environment.


Do green buildings cost more?
Many green strategies, if blended well, actually cost less. In any case, the question is not the price but the efficacy. For instance, high-performance windows and window frames increase the cost of the building envelope up front, but the resulting savings in lighting, heating and cooling reduce energy use and carbon emissions significantly.
A green building reduces capital costs, maintenance costs, operation costs, risk and liabilities and enhances social and environmental serenity of a place.
Constructing a green building is a complex integrated process in which design elements are re-evaluated, integrated and optimized as a part of the whole building solution.

Which is green and which is not?             
The Toronto-based World Green Building Council currently recognizes some 20 established green building councils around the world. A couple of the associated rating systems are Leadership in Energy and Environmental Design (LEED) and GRIHA (India).

Are there any green buildings in India?
Indira Paryavaran Bhawan is India’s first on site net zero building. The building is expected to qualify as a five-star GRIHA and to have a LEED Platinum rating. The building has its own solar power plant, sewage treatment facility, and geothermal heat exchange system.

Manipal University Jaipur’s academic and administrative buildings have been awarded LEED Platinum Certificate & GRIHA award for water management.


Green Energy

Google co-founder Larry Page is fond of saying that if you choose a harder problem to tackle, you'll have less competition. This philosophy has taken a plenitude of the company's conceptions to the moon: a translation engine that knows 80 languages, the world's greatest search engine, self-driving cars, and the wearable computer system called Google Glass, just to name a few.
Then the technology behemoth decided to tackle the world’s climate and energy sector. After committing tremendously large amounts of resources for the cause, it succeeded in establishing a few of the world’s most efficient data centers, purchased large quantities of renewable energy, and offset what remained of its carbon footprint.
When the ambitious RE&lt;C initiative was established in 2007, we all may have expected another “moonshot” from the tech giant. But unfortunately it never really left the earth's orbit. In 2011 Google drew the curtain on the initiative, whose primary aim had been to make renewable energy cost-competitive with coal. Two of its engineers, Ross Koningstein and David Fork, stated that “Trying to combat climate change exclusively with today's renewable energy technologies simply won't work; we need a fundamentally different approach.”
Following the aforementioned decision to suspend its R&D efforts in RE&lt;C, Google has invested more than $1 billion directly in solar and wind projects. The company succeeded in acquiring enough renewable energy to offset its emissions. Google's efforts have also helped bring down the average cost of renewables to rival the cost of constructing coal plants.
“You’d think the thrill might wear off this whole renewable energy investing thing after a while. Nope—we’re still as into it as ever,” stated the company buoyantly in a blog post last fall.
That being said, Google has been using renewable energy to power 35% of its operations, and is striving to find ways to expand its use of clean energy. This includes trying new, innovative technology at its offices and purchasing green power near its data centers.


In addition to its 1.9 MW of solar arrays, other forms of renewable energy have been incorporated. This includes running a 970 kW cogeneration unit off local landfill gas, which not only removes methane, a particularly potent greenhouse gas, but converts it into electricity and heat that are used on the campus. Efficient ground-source heat pumps and solar water heating have also been set up on office buildings in Mountain View, Hyderabad, and Tel Aviv.
Google has also signed six large-scale Power Purchase Agreements (PPAs) that are long-term financial commitments to buy renewable energy from specific facilities. 
Google has also made agreements to fund over $1.5 billion in clean energy wind and solar projects. Some of them are:
  • Regulus: repurposing an oil and gas field for renewable energy
  • Panhandle 2 Wind Farm: financing wind in Texas
  • Recurrent Energy: solar facilities in California and Arizona
  • Jasper Power Project: investing in South African solar
  • Spinning Spur Wind Farm: investing in West Texas wind
  • Rippey Wind Farm: financing wind power in Iowa
  • SolarCity: solar for thousands of residential rooftops
  • Atlantic Wind Connection: a superhighway for clean energy transmission
  • Alta Wind Energy Center: harnessing winds of the Mojave
  • Shepherd’s Flat: one of the world’s largest wind farms
  • Photovoltaics in Germany: investing in clean energy overseas
But the most exciting one for me is Google X's acquisition of the high-altitude wind startup Makani Power.

Makani Power has been building and testing a new kind of wind turbine: an airborne craft attached to a tether (which can be 600 metres long) that circles high above the ground, capturing wind that is stronger and more consistent than what is typically found near the surface. The idea behind the innovation is that capturing high-altitude wind could be cheaper, more efficient, and more suitable for certain environments, such as offshore sites, than traditional wind turbines.

This particular idea does sound crazy. But I unequivocally believe that we need crazy and innovative ideas if we want to move towards a more sustainable and greener future, because, as Steve Jobs said,
“…because the ones who are crazy enough to think that they can change the world, are the ones who do.”


Sunday, 1 March 2015

On an epic journey of Science: Interstellar 2



In the last part, we got to know the astounding science behind the wormhole. But that's not all: Interstellar's greatest spectacle is its black hole and the accretion disk surrounding it. A major plot point of the movie is the time dilation experienced near the black hole. Kip Thorne, the Caltech physicist and theorist who served as scientific advisor for Interstellar, told the filmmakers straight off the bat that to accomplish such a time dilation effect realistically on a massive scale, they would need a humongous black hole, or, as it is properly termed in astrophysics, a supermassive black hole. Supermassive black holes of this kind are generally found at the centres of galaxies. Showing such a massive black hole with extreme mathematical accuracy, and conveying its gigantic size against the tiny human spaceship, was real hard work, and to portray it realistically, 3D was written off.
The black hole generated using Thorne's calculations was extremely big: compared with our solar system, the body itself would extend out to Earth's orbit and its accretion disk beyond the orbit of Mars. It was named Gargantua in the movie.

This black hole, Gargantua, has a mass 100 million times that of the Sun. It is 10 billion light years away from Earth and spins at an astounding 99.6% of the speed of light.
We already know how a singularity is created. In Gargantua's case, its mass and speed of rotation create an extremely strong gravitational field, which warps the fabric of spacetime so severely that nothing, not even light, can escape from inside the event horizon. One consequence, predicted by Einstein's general relativity, is time dilation: clocks deep in a strong gravitational field run slowly. This means that if you were close to a black hole, our perceptions of time and space would diverge; relative to you, time would seem to be passing faster for me, watching from far away.
In Interstellar, the planet the crew visit orbits so close to the event horizon that 1 hour on the planet is equal to 7 years on Earth. Graphically, this dilation can be shown as in the following figure.
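The "1 hour = 7 years" figure corresponds to an enormous slowdown factor, and a couple of lines of arithmetic show why the planet must sit almost on top of the horizon. The second step below uses the simple non-spinning (Schwarzschild) dilation formula purely for intuition; the movie's Gargantua is a spinning Kerr black hole, where stable orbits that close become possible.

```python
# How strong is the quoted dilation, and how close to the horizon does it
# put the planet? (Schwarzschild formula used only as a rough illustration.)
HOURS_PER_YEAR = 365.25 * 24

factor = 7 * HOURS_PER_YEAR            # earth-hours elapsed per planet-hour
print(f"time runs ~{factor:,.0f}x slower on the planet")

# Schwarzschild dilation: factor = 1 / sqrt(1 - rs/r). Inverting:
rs_over_r = 1 - 1 / factor**2
print(f"rs/r = {rs_over_r:.12f}")      # within ~3 parts in 10^10 of the horizon
```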

Another problem faced during the making of the film was the scientific plausibility of planets surviving in orbit close to Gargantua. At first it seems that no planet could endure the extremely high gravitational field responsible for the time dilation. However, it turns out to be possible on one condition: the black hole must be spinning very fast, fast enough that an object in circular orbit around Gargantua is spared the destructive effects of such a high gravitational field. Hence Gargantua rotates at 99.6% of the speed of light.
In addition to this, the black hole's accretion disk also posed a problem. Accretion disks are ring-like disks of gas flowing into the black hole, comparable to the rings around Saturn. The problem with accretion disks is that they are very energetic and emit a lot of lethal X-rays and gamma rays, which should have fried the astronauts alive as soon as they got anywhere near a black hole of Gargantua's size. This was rectified by placing the black hole in a phase where its accretion disk is in an anemic state, cooled down, at the time of the visit, to a temperature similar to that of the surface of the Sun. Such a disk doesn't emit the X-rays and gamma rays a normal, energetic accretion disk would, sparing the astronauts and making life possible on the planets orbiting the black hole. Of course, such a cooled-down accretion disk has never been observed, but that is put down to the lack of sensitive technology for deep-space observation: existing instruments can only read high energy outputs, and such cooled-down states are invisible to them. In fact, Igor Novikov, a Russian scientist, had worked out the relativistic theory of thin accretion disks back in 1970.
After making Gargantua's existence in the movie as scientifically accurate as possible, the team faced the problem of creating the phenomenon on screen. For the wormhole they had designed a new renderer that could treat light's path as curved rather than straight, and had successfully gotten a wormhole out of it, so they decided to use the same method for the black hole. But black holes, as the name suggests, devour light: a ray from a source doesn't keep travelling to infinity, as rays ordinarily do, but dies within the black hole. The renderer thus had to simulate an Einsteinian effect called gravitational lensing, and the bendy bits of distortion, wherever light bent and wasn't travelling in a straight line, overtaxed the computation so much that some individual frames each took up to 100 hours to render. In the end the movie brushed up against 800 terabytes of data.
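To get a feel for the light-bending the renderer had to handle, there is a classic weak-field formula for the deflection of a ray grazing a mass, α = 4GM/(c²b). For a ray grazing the Sun it gives Einstein's famous ~1.75 arcseconds; near a black hole's photon sphere the bending becomes arbitrarily large and this simple formula no longer applies, which is exactly why the full renderer was needed.

```python
import math

# Weak-field light deflection alpha = 4GM/(c^2 b) for a ray grazing the Sun.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m (impact parameter b)

alpha = 4 * G * M_SUN / (c**2 * R_SUN)             # radians
print(f"{math.degrees(alpha) * 3600:.2f} arcsec")  # → 1.75 arcsec
```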
But the movie was in 2D, and after all the innovative imagery that went into making the black hole, it would have ended up looking like a flat disk in the 2D visual medium, despite existing as a fully sized 3D render. Christopher Nolan handed the task of making the black hole look like a 3D sphere, rather than a flat disk, to the head of Interstellar's CGI team, Paul Franklin. Franklin picked up the idea of using the accretion disk found around some black holes to define its sphere. This accretion disk would later become a major plot point in the story, as we all know.
Franklin had von Tunzelmann attempt a tricky demo to try out how the black hole would look with an accretion disk. She generated a flat, multicoloured ring, a stand-in for the accretion disk, and positioned it around their spinning black hole. This resulted in something unprecedented and utterly amazing: the space warping around the black hole also warped the accretion disk. So instead of looking like Saturn's rings, the light created an extraordinary halo around the black hole.
The Double Negative team (the company working on Interstellar's CGI) thought of it as a bug until it was shown to Thorne. It led to a moment of discovery: Thorne realized the team had correctly modeled a phenomenon inherent in the math he'd supplied.
No one knew what a black hole would look like until they built one. Light, temporarily trapped around the black hole, produced an unexpectedly complex fingerprint pattern near the black hole's shadow, and the glowing accretion disk appeared above the black hole, below it, and in front of it. Thorne had never expected this, but he later realized that the phenomenon had been there in the math all along, just waiting to be unlocked. In the end Nolan got his visually immersive movie, Thorne got his wish of making a movie that taught its audience some accurate science, and both of them got something they never expected: a scientific discovery. That is why the appearance of the black hole in the movie is visually so complex: because it's accurate.


There's no doubt that Interstellar came together beautifully; the merger of real science and stunning visuals has transformed this movie into a science-fiction classic, where the science is barely fictional yet so far beyond our reach that it can be realized right now only in fiction, though it might be tapped into in the future. Thorne also hoped that the movie might act as bait for viewers, attracting some of them towards astrophysics and a career in it, rather than law, medicine or other professions. In writing this article, I echo the same feeling: I hope someone reading it starts taking astrophysics seriously as their future, for it is right now the field of physics with the most possibilities, with so little known and so much still to discover. And who knows? In some dystopian future, this might just save us.

Sunday, 22 February 2015

Why we need self-driving cars

After being enlightened about Google’s new self-driving electric car, I spent my whole afternoon cheating on FIFA whilst reading about this god-sent technology.

Instead of a steering wheel and pedals, this battery-powered electric vehicle has a stop-go button. The novel prototypes have a plastic build for the most part, and limited speed: the battery/electric propulsion system restricts the maximum speed to about 40 kph (25 mph). Google plans to manufacture around 200 of these extremely cute, mostly-plastic cars over the next year, albeit restricting road testing to California for the next couple of years.
Well, now without any further ado, allow me to tell you why we need self-driving cars:

1. First and foremost, we humans are flawed beings! We snooze, we text, we eat behind the wheel. Not to mention drunk imbeciles revolting against speed limits and traffic rules! Road accidents have become such a primary cause of death in our country that probably even the “Grim Reaper” is begging for mercy. More than 100,000 deaths due to car accidents, and there are dimwits who've still failed to learn. We need these cars to take over the roads soon, for there is definitely a plenitude from our flawed race who'll follow suit!

2. Now, questions like “how much will these cars cost?” will arise. But instead of thinking superficially, we should delve deeper and look at the fact that there's a plethora of disabled people in the world who work. We can't ignore how this technology could transform the lives of the elderly or the disabled.

3. The cars use a mixture of 3D laser-mapping, GPS, and radar to analyse and interpret their surroundings. The radar is interesting, as it allows the car to see through objects instead of relying on line of sight. As of now they can't process a variety of complex situations, but Google is hoping that with significant development the cars will eventually be able to handle all of this as well as (or better than) a human can.

4. These cars are adorable! If you take other EVs into consideration, like the Tesla Model S or the Toyota Prius, they have a more aggressive, menacing stance. And the most intriguing thing is that these cars were deliberately designed to look so endearing. Why so? Well, the answer is human psychology. Our brains are hardwired to treat objects, animate or inanimate, with care, caution and reverence if they resemble living things, because our moral compasses snap into place.

5. There'll definitely be myriad skeptics who'll wonder how autonomous vehicles could take over the highway. But what they need to understand is that a robot is already differentiating cars from pedestrians, firing millions of photons from a laser, and interpreting, processing, and reacting to the hand signals of a biker, all at once. Instead of an organic brain which has had millions of years to evolve and yet fumbles at intersections, our chauffeur will be an artificial brain born less than a decade ago. It obviously still needs to evolve.
So why don't we ignore some temporary shortcomings and thank Google for trying to eliminate human error from a chore that has been entirely controlled by humans for decades?
Let’s embrace innovation and get ready for a revolution in transportation.