Saturday 14 March 2015

The Mysterious Dwarf Planet Ceres

“The successful launch of Mission Mars has heightened the interest of the common people in space mysteries. There are many unsolved, unknown stories of asteroids in space. I have gone through many such findings using NASA and other sources. The present story covers the presence of a dwarf planet in our own cosmic neighbourhood.”
We had a craze for the Moon. We now have a craze for Mars. Are we missing something? The asteroid belt? Simply speaking, it consists of non-spherical, uninteresting blocks of rock that float around between Mars and Jupiter, not worth a visit or a mention. Fret not, because something large roams among those rocks, something spherical in shape and, as such, a dwarf planet.
As has become my habit, I started following the NASA Dawn space probe[1], launched in 2007 to take a peek at Ceres, our relatively lesser-known dwarf planet neighbour residing in the asteroid belt between our new affair Mars and Jupiter. It has taken the probe 7.5 years to reach its destination, and it will spend the next 14 months mapping the diminutive world. The $473 million US Dawn mission is the first to target two different celestial objects in order to better understand how the solar system evolved. It is powered by ion propulsion engines, which provide a gentle yet constant acceleration, making them more efficient than conventional chemical rockets. With its massive solar wings unfurled, it measures about 20 metres, the length of a tractor-trailer.
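The appeal of ion propulsion is easy to see with simple arithmetic: a tiny but continuous thrust, integrated over months, builds into a large change in velocity. The thrust and mass figures below are assumed round numbers for illustration, not official Dawn specifications.

```python
# Rough sketch: why a feeble but constant ion-engine thrust pays off.
# Figures are assumed round numbers, not official Dawn specifications.

THRUST_N = 0.09   # ~90 millinewtons, roughly the weight of a sheet of paper
MASS_KG = 1200.0  # assumed spacecraft mass

accel = THRUST_N / MASS_KG                      # m/s^2
seconds_per_year = 365.25 * 24 * 3600
delta_v_per_year = accel * seconds_per_year     # ignores propellant mass loss

print(f"acceleration: {accel:.2e} m/s^2")
print(f"delta-v per year of continuous thrusting: {delta_v_per_year / 1000:.2f} km/s")
```

Even at an acceleration of only 7.5×10⁻⁵ m/s², a year of continuous thrusting accumulates over 2 km/s of delta-v, which is why the journey takes years but needs very little propellant.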
It is the only dwarf planet in the inner Solar System and the only object in the asteroid belt known to be unambiguously rounded by its own gravity! In fact, on closer observation it looks just like a miniature Luna, our Moon. And is it a coincidence that Ceres has a surface area roughly equal to that of our country, India? Ceres is the largest object in the asteroid belt. Its mass has been determined by analysing the influence it exerts on smaller asteroids; results differ slightly between researchers, and the average of the three most precise values as of 2008 is 9.4×10^20 kg. With this mass, Ceres comprises about a third of the estimated total mass of the asteroid belt, 3.0 ± 0.2 × 10^21 kg.
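The "about a third" claim follows directly from the two mass figures quoted above, as a quick check shows:

```python
# Quick check of the mass figures quoted in the text.
ceres_mass = 9.4e20   # kg, average of the three most precise 2008 estimates
belt_mass = 3.0e21    # kg, estimated total mass of the asteroid belt

fraction = ceres_mass / belt_mass
print(f"Ceres is about {fraction:.0%} of the belt's total mass")  # → about 31%
```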

This image was taken on 25th of February 2015
The Dawn space probe will study Ceres for 16 months. At the end of the mission it will stay in its lowest orbit indefinitely, where it could remain for hundreds of years.
The first thing that comes to a layman’s mind is the question of habitability. Although not as actively discussed as a potential home for microbial extraterrestrial life as Mars, Titan, Europa or Enceladus, the presence of water ice has led to speculation that life may exist on Ceres, and that ejecta from Ceres could even have carried it to Earth.
The recent look at Ceres has answered many questions but raised many more. Researchers think Ceres’ interior is dominated by a rocky core topped by ice, which is in turn insulated by rocky lag deposits at the surface. A big question the mission hopes to answer is whether there is a liquid ocean of water at depth; some models suggest there could well be. The evidence will probably be found in Ceres’ craters, which have a muted look to them: the soft interior of Ceres has likely had the effect of relaxing the craters’ original hard outlines.

One big talking point has dominated the approach to the object: the origin and nature of two very bright spots seen inside a 92 km-wide crater in the northern hemisphere. I speculate that those bright spots are some kind of icy cryovolcanoes, which sound cool enough to me (pun intended).
I believe that Ceres will eventually tell us something about the origins of our solar system. Early in the solar system’s history, Ceres was on its way to forming a planetary embryo and would have merged with other objects to form a terrestrial planet, but its evolution was somehow halted and its form has remained intact ever since.
We will hear more of Ceres soon, as the Chinese are planning their own mission in the next decade. I hope India will follow suit…




References
1. http://www.bbc.com/news/science-environment-31754586
2. NASA Jet Propulsion Laboratory raw data




[1] The NASA Dawn probe is a spacecraft for analysing the asteroid belt.
The author is pursuing engineering in Electronics and Communication at Manipal University, Jaipur. He is a keen observer of space research and technology. He may be contacted at sankrant.chaubey@gmail.com


Article by:
Sankrant Chaubey
Runner-up for Techscribe

The next “play” of technology

Technology. What is it?
I like one of Alan Kay’s definitions of technology. He says, “Technology is anything that was invented after you were born.” Yeah, give it a thought; it sounds just about right, and it sums up a lot of what we’re talking about these days. Danny Hillis actually has an update on that; he says, “Technology is anything that doesn’t quite work yet.”
The aspect of technology that I am concerned with here is its capability of interfacing the digital and the physical worlds. The digital world is a product of computer science, and it is the job of technology to allow humans to interact with it. For this we started with keyboards and mice connected to computers; now we have touch screens and other novel techniques like speech and gesture control and augmented reality. I’d also like to clarify here that augmented reality is not virtual reality. It is more than just a very lifelike rendition of video recorded on cameras; it also involves the manipulation and extraction of important information by either human or artificial intelligence.
Now, what is the ultimate dream for technology? I think it is that it should evolve to become more independent of human input and intelligent enough to do our mundane chores for us. We are already heading towards this in the form of automated robots in manufacturing plants, cruise control in cars, computer games like FIFA (which involve artificial decision making to some extent), and so on.

Hold that thought and consider the current state of technology in sports. For broadcasting, very new technologies are being used, but for the actual gameplay it is at a very incipient stage. Only now has FIFA decided to allow managers to receive real-time information, such as players’ fatigue, oxygen and lactic acid levels, from sensors attached to their bodies, in order to decide on tactical substitutions. But even this isn’t directly involved with how the game is played.
Now, what would it be like to combine the two fields of artificial intelligence and human-computer interaction and use them to improve the gameplay itself? Well, that would be one hell of a pie.
What if, during a free kick, the defending team’s players were informed of the set-piece routine they may be facing, so they could adjust accordingly to thwart any surprise moves by the attacking team? This could be done by a computer observing the positions of the attackers through cameras, checking its database of past set-piece routines used anywhere in the world, intelligently deciding how the attacking team might be varying it, and then informing each defender, through an earpiece, how best to position himself.
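The matching step described above could, at its simplest, be a nearest-neighbour lookup: compare the attackers' observed positions against a database of known routines and report the closest match. The routine names, positions and distance measure below are all invented for illustration; a real system would be far richer.

```python
# Toy sketch of the set-piece matcher: find the stored routine whose
# attacker positions are closest to what the cameras observe.
# All data here is hypothetical, for illustration only.
import math

def routine_distance(observed, routine):
    """Sum of distances between paired attacker positions (metres)."""
    return sum(math.dist(a, b) for a, b in zip(observed, routine))

ROUTINE_DB = {
    "near-post overload": [(10, 2), (11, 3), (14, 1)],
    "far-post cluster":   [(20, 8), (21, 9), (22, 7)],
}

def closest_routine(observed_positions):
    return min(ROUTINE_DB,
               key=lambda name: routine_distance(observed_positions, ROUTINE_DB[name]))

observed = [(10.5, 2.2), (11.3, 2.8), (13.6, 1.4)]
print(closest_routine(observed))  # → near-post overload
```

Each defender's earpiece advice would then be derived from whatever counter is stored alongside the matched routine.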
There are many possibilities for its use. An attacker could be informed about defenders catching up on him from behind, where he normally would have been blindsided. He could be told the best direction in which to take a free kick or a shot, given the goalkeeper’s position.
Taking it to the next level, the player starting a move could be informed which teammate to pass to, and then that player would be told whom to pass to, and so on: a very fluid and unstoppable move, managed from start to end by an artificially intelligent computer. Presenting all this information in the best possible way, through augmented reality, would be the job of the team’s IT department. For weaker teams this would be information overload, while for good teams it would be a boon.
Detractors will say this takes away the human element from the game. But similar things are already being used in sports: in 1994, NFL teams installed radios in their quarterbacks’ helmets to pass on instructions. The idea was similar, to help the player make better decisions; only, the helper then was the assistant coach, and now it would be a computer, which would obviously be faster and more accurate. Moreover, it is human tendency to do less labour, mental or physical, and the system reduces just that bit of extra mental load. It would still require skilled players to act on the proposed plan.

This would make the game faster, more accurate and more enjoyable, and as a result attract a bigger audience. A win-win situation for all involved.
As a final thought, could technology evolve and get refined to a level high enough to do such things?

All I can say is: ‘The best way to predict the future is to invent it.’


Article by:
Shreyance Tewari
ECE 3rd year, MUJ
Winner of Techscribe

Thursday 12 March 2015

Smart healthcare

A long time ago, a revolution took place in the field of healthcare. Human anatomy was recognized as a science, and doctors who operated on people came to be held in high esteem. Healthy living habits were developed and spread through the masses, but unfortunately were never fully realized.
Living in India, we know that even though our country is one of the leading nations in healthcare research, much of its rural population rarely has basic healthcare available to it. Part of the problem lies in the fact that the healthcare budget of our country is very low in comparison to other nations. That is why many citizens still die of diseases that could easily be cured. Many people die without ever realizing there was something wrong with them, or they don’t find out until it’s too late. Some can’t keep a proper tab on their condition because they can’t get regular check-ups, and ultimately suffer for it. If only there were a way we could stop this, a way that could be implemented within our country’s rather small healthcare budget and its limited number of doctors, a way accessible to all, a cost-effective way. A smart way.
Enter the world of smart healthcare: a way to automate healthcare and extend the better standard of living modern technology offers to even the remotest of areas. Let’s consider the scenario of a village that has basic mobile connectivity, a rudimentary healthcare system and a handful of literate people. By itself it can’t diagnose serious diseases, especially the ones disguised as something very normal, like swine flu, most of whose symptoms are consistent with the common cold. But that system, when paired with an expert on a video chat over mobile connectivity via a tablet, can help save many lives in the village and contain the disease. Called telehealth, it is one of the most effective systems for delivering cheap and reliable healthcare, not just in India but in countless countries, by simply eliminating the need for physical presence.
But that’s not all, for smart healthcare comes with a variety of devices that make us healthier every day. The pedometer is one of the most common examples. By telling us how much we actually ran or walked in a day, it not only helps us track our fitness but also motivates us to be healthier. And now, with the introduction of smartwatches and smart bands like Fitbit, it’s easier than ever, as the device simply lives inside these everyday gadgets. These gadgets can even measure heart rate throughout the day, which is beneficial for people with heart disease or hypertension: they can detect a problem and trigger a warning if something starts going wrong, potentially saving hundreds of lives.
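The warning logic in such a band can be very simple in principle: flag sustained readings outside a normal range rather than a single noisy spike. The sketch below is a minimal illustration with invented thresholds, not medical guidance and not any vendor's actual algorithm.

```python
# Minimal sketch of a wearable heart-rate alert: warn only when several
# consecutive readings fall outside a normal range, so that a single
# noisy sample doesn't trigger it. Thresholds are illustrative only.

def heart_rate_alert(readings_bpm, low=40, high=120, sustained=3):
    """Return True if `sustained` consecutive readings fall outside [low, high]."""
    streak = 0
    for bpm in readings_bpm:
        streak = streak + 1 if (bpm < low or bpm > high) else 0
        if streak >= sustained:
            return True
    return False

print(heart_rate_alert([72, 75, 130, 128, 131, 74]))  # → True (three high in a row)
print(heart_rate_alert([72, 75, 130, 74, 70, 68]))    # → False (one-off spike)
```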
The same goes for specially made bands for people suffering from epilepsy, cancer or other serious diseases, which trigger an alarm and send the patient’s location to family members in case of an emergency. One of the most revolutionary changes has come in diabetes treatment, with the introduction of test strips that detect blood glucose levels instantly and can be used by anyone, enabling patients to keep their vitals steady and in check.
Similar methods exist by which life-threatening diseases such as AIDS can be diagnosed quickly and cheaply. Smart devices like these are going to make healthcare better and cheaper everywhere, and expand the reach of the system to the remotest areas. What makes the approach so powerful is that it can do all this by simply building on the current system.


Wednesday 4 March 2015

Build Green

“Anybody can write a book and most of them are doing it; but it takes brains to build a house.”
Charles F. Lummis, United States journalist and Indian rights and historic preservation activist.

But then again, our ancestors, blinded by the glare of industrialization and modernization, flouted this wisdom, and so boomed an age in which lush greenery was replaced by concrete jungles that our forefathers saw as a mark of pride and dignity. However, this was short-lived. Inapt use of building materials, absence of proper construction methodologies, failures in execution and maintenance, and lack of research over the past few decades left an ecological footprint of loss and wastage of natural resources, and destroyed the splendour bestowed upon this planet and its beings. The environmental impact of concrete, its production and its uses, has been multifaceted: concrete production is one of the primary sources of carbon dioxide, a major greenhouse gas.
We are not primitive anymore. We have become aware of the environment, we are wary of the mistakes that our ancestors have made. We vivaciously dream about new technologies and make efforts to mold them into reality.
Nowadays, “GOING GREEN” has become a top priority in our society, and sustainable buildings and design are at the forefront of this green revolution.

What is a green building?
The answer to this is quite simple. A green building is one whose construction and lifetime operation assure the healthiest possible environment while representing the most efficient and least disruptive use of land, water, energy and resources. The optimum design solution is one that effectively imitates all of the natural systems and conditions of the pre-developed site after development is complete. A green building is a smart building: it senses and reacts accordingly, catering to the needs of the user and the local environment.


Do green buildings cost more?
Many green strategies, if blended well, actually cost less. However, the question is not about the price but about efficacy. For instance, high-performance windows and window frames increase the initial cost of the building envelope, but the resulting reductions in lighting, heating and cooling loads can cut operating costs and carbon emissions significantly.
A green building reduces capital costs, maintenance costs, operation costs, risk and liabilities and enhances social and environmental serenity of a place.
Constructing a green building is a complex integrated process in which design elements are re-evaluated, integrated and optimized as a part of the whole building solution.

Which is green and which is not?             
The Toronto-based World Green Building Council currently recognizes 20 established green building councils around the world. Two of the associated rating systems are Leadership in Energy and Environmental Design (LEED) and GRIHA (India).

Are there any green buildings in India?
Indira Paryavaran Bhawan is India’s first on site net zero building. The building is expected to qualify as a five-star GRIHA and to have a LEED Platinum rating. The building has its own solar power plant, sewage treatment facility, and geothermal heat exchange system.

Manipal University Jaipur’s academic and administrative buildings have been awarded LEED Platinum Certificate & GRIHA award for water management.


Green Energy

Google co-founder Larry Page is fond of saying that if you choose a harder problem to tackle, you’ll have less competition. This philosophy has taken a plethora of the company’s conceptions to the moon: a translation engine that knows 80 languages, the world’s greatest search engine, self-driving cars, and the wearable computer system called Google Glass, just to name a few.
Then the technology behemoth decided to tackle the world’s climate and energy sector. After committing tremendous resources to the cause, it succeeded in establishing a few of the world’s most efficient data centers, purchased large quantities of renewable energy, and offset what remained of its carbon footprint.
When the ostentatiously ambitious RE&lt;C initiative was established in 2007, we all may have expected another “moonshot” from the tech giant. Unfortunately, that one never really left Earth’s orbit. In 2011 Google brought the curtain down on the initiative, whose primary aim was to make renewable energy compete with coal. Two of its engineers, Ross Koningstein and David Fork, stated: “Trying to combat climate change exclusively with today’s renewable energy technologies simply won’t work; we need a fundamentally different approach.”
Following the decision to suspend its R&amp;D efforts in RE&lt;C, Google has invested more than $1 billion directly in solar and wind projects. The company succeeded in acquiring enough renewable energy to offset its emissions, and its efforts have also helped bring down the average cost of renewables to rival the cost of constructing coal plants.
“You’d think the thrill might wear off this whole renewable energy investing thing after a while. Nope—we’re still as into it as ever,” stated the company buoyantly in a blog post last fall.
That being said, Google has been using renewable energy to power 35% of its operations, and is striving to find further ways to expand its use of clean energy. This includes trying new, innovative technology at its offices and purchasing green power near its data centers.


In addition to 1.9 MW of solar arrays, other forms of renewable energy have been incorporated. These include a 970 kW cogeneration unit running off local landfill gas, which not only removes methane, a particularly potent greenhouse gas, but converts it into electricity and heat used on the campus. Efficient ground-source heat pumps and solar water heating have been set up on office buildings in Mountain View, Hyderabad and Tel Aviv.
Google has also signed six large-scale Power Purchase Agreements (PPAs) that are long-term financial commitments to buy renewable energy from specific facilities. 
Google has also made agreements to fund over $1.5 billion in clean energy wind and solar projects. Some of them are:
  • Regulus: repurposing an oil and gas field for renewable energy
  • Panhandle 2 Wind Farm: financing wind in Texas
  • Recurrent Energy: solar facilities in California and Arizona
  • Jasper Power Project: investing in South African solar
  • Spinning Spur Wind Farm: investing in West Texas wind
  • Rippey Wind Farm: financing wind power in Iowa
  • SolarCity: solar for thousands of residential rooftops
  • Atlantic Wind Connection: a superhighway for clean energy transmission
  • Alta Wind Energy Center: harnessing winds of the Mojave
  • Shepherd’s Flat: one of the world’s largest wind farms
  • Photovoltaics in Germany: investing in clean energy overseas
But the most exciting one for me is that Google X has acquired the high-altitude wind startup Makani Power.

Makani Power has been fabricating and testing a new design of wind turbine, attached to a tether (potentially 600 metres long), that flies high above the ground, capturing wind that is stronger and more consistent than what is typically found near the surface. The idea behind the innovation is that capturing high-altitude wind could be cheaper, more efficient, and better suited to certain environments, such as offshore, than traditional wind turbines.
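Why does stronger wind at altitude matter so much? Because the power available in wind scales with the cube of wind speed (P = ½ρAv³), so even a modest increase in speed pays off dramatically. The wind speeds below are assumed illustrative values, not Makani measurements.

```python
# Wind power scales with the cube of wind speed: P = 0.5 * rho * A * v^3.
# Doubling the wind speed gives eight times the power per square metre.
# Speeds below are illustrative assumptions.

RHO = 1.2   # air density near sea level, kg/m^3 (approximate)
AREA = 1.0  # swept area, m^2 (per-square-metre comparison)

def wind_power_w(v_mps):
    return 0.5 * RHO * AREA * v_mps ** 3

ground, aloft = 6.0, 12.0  # m/s, assumed wind speeds at ground level vs altitude
print(wind_power_w(aloft) / wind_power_w(ground))  # → 8.0
```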

This particular idea does sound crazy. But I unequivocally believe that we need crazy, innovative ideas if we want to move towards a more sustainable and greener future, because, as Steve Jobs said,
“…the ones who are crazy enough to think that they can change the world are the ones who do.”


Sunday 1 March 2015

On an epic journey of Science: Interstellar 2



In the last part, we got to know the astounding science behind the wormhole. But that’s not all: Interstellar’s greatest spectacle is its black hole and the accretion disk surrounding it. A major plot point of the movie is the time dilation experienced near the black hole. Kip Thorne, the Caltech physicist and theorist who served as scientific advisor for Interstellar, told the filmmakers right off the bat that to accomplish such a time dilation effect realistically they would need a humongous black hole, or, as it is properly termed in astrophysics, a supermassive black hole. Such supermassive black holes are generally found at the centres of galaxies. Showing such a massive black hole with extreme mathematical accuracy, at its gigantic size against the tiny human spaceship, was real hard work, and to portray it realistically, 3D was written off.
The black hole generated using Thorne’s calculations was extremely big: compared in size to our solar system, the body itself would extend up to Earth’s orbit, and its accretion disk beyond the orbit of Mars. It was named Gargantua in the movie.

This black hole, Gargantua, has a mass 100 million times that of the Sun. It is 10 billion light years away from Earth and spins at an astounding 99.6% of the speed of light.
We already know how a singularity is created. In Gargantua’s case, its mass and speed of rotation create an extremely strong gravitational field, which warps the fabric of spacetime around it. This produces the time dilation effect Einstein predicted for the vicinity of a black hole: if you were close to a black hole, your perception of time and that of a distant observer would diverge, with time seeming, relatively speaking, to run faster for the observer far away. This is in accordance with relativity, according to which time passes more slowly in strong gravitational fields.
In Interstellar, the first planet the crew visits sits at such a distance from the event horizon that 1 hour on the planet is equal to 7 years on Earth. This dilation can be depicted graphically, as in the following figure.
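How extreme is that "1 hour = 7 years" factor? A back-of-the-envelope check using the simpler non-spinning (Schwarzschild) dilation formula, dt_planet/dt_earth = sqrt(1 − r_s/r), shows the planet would have to sit essentially at the horizon radius r_s. The film instead uses a rapidly spinning Kerr black hole, which is precisely what makes a stable orbit that deep plausible; the calculation below is only a sketch of the magnitude involved.

```python
# Back-of-the-envelope check of the "1 hour = 7 years" dilation using
# the non-spinning Schwarzschild formula dt_near/dt_far = sqrt(1 - rs/r).
# (The film uses a spinning Kerr hole; this only illustrates the magnitude.)
import math

dilation = 1 / (7 * 365.25 * 24)        # 1 hour passing per 7 years elsewhere
r_over_rs = 1 / (1 - dilation ** 2)     # solve sqrt(1 - rs/r) = dilation for r/rs

print(f"dilation factor: {1 / dilation:,.0f}")        # → 61,362
print(f"r / r_s = 1 + {r_over_rs - 1:.1e}")           # essentially at the horizon
```

A factor of about 61,000 means the orbit must hug the horizon to within one part in a few billion, which is why Thorne insisted on a huge, fast-spinning black hole.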

Another problem faced during the making of the film was the scientific plausibility of planets surviving in orbit so close to Gargantua. It would seem that no planet could endure the extremely high gravitational field responsible for the time dilation. However, it turns out to be possible, on the condition that the black hole spins very fast, fast enough that an object in circular orbit around Gargantua is spared the destructive effects of such a strong field. Hence Gargantua spins at 99.6% of the speed of light.
In addition, the black hole’s accretion disk posed a problem. Accretion disks are ring-like disks of gas that spiral into the black hole, comparable to the rings around Saturn. The problem is that they are very energetic, emitting a lot of lethal X-rays and gamma rays, which should have fried the astronauts alive as soon as they got anywhere near a black hole of Gargantua’s size. This was rectified by placing the black hole in a phase where its accretion disk is anemic and cooling down, its temperature at the time of the visit similar to that of the Sun’s surface. Such a disk doesn’t emit the X-rays and gamma rays a normal, energetic accretion disk would, sparing the astronauts and making life possible on the planets orbiting the black hole. Of course, such a cooled-down accretion disk has never been observed, but that is due to the lack of sufficiently sensitive technology for deep-space observation: existing instruments can only detect high-energy outputs, and such cooled-down states are invisible to them. In fact, Igor Novikov, a Russian scientist, had worked out the relativistic theory of thin accretion disks back in 1970.
After making Gargantua’s existence in the movie as scientifically accurate as possible, the team faced the problem of creating the phenomenon on screen. For the wormhole, they had designed a new renderer that could treat light’s path as curved rather than straight, and had successfully got a wormhole out of it, so they decided to use the same method for the black hole. But black holes, as the name suggests, are murder on light: light from a source doesn’t keep travelling to infinity, as rays normally do, but dies within the black hole. The renderer therefore had to capture an Einsteinian effect called gravitational lensing, and the bendy bits of distortion, wherever light bent and wasn’t travelling in a straight line, so overtaxed the computation that some individual frames each took up to 100 hours to render. In the end, the movie brushed up against 800 terabytes of data.
But the movie was in 2D, and after all this innovative imagery, the black hole would have ended up looking like a flat disk in a 2D visual medium, despite existing as a fully realized 3D render. Nolan handed the task of making the black hole look like a 3D sphere, rather than a flat disk, to the head of Interstellar’s CGI team, Paul Franklin. Franklin picked up the idea of using the accretion disk found around some black holes to define its sphere. This accretion disk would later become a major plot point in the story, as we all know.
Franklin had Von Tunzelmann attempt a tricky demo to see how the black hole would look with an accretion disk. She generated a flat, multicoloured ring, a stand-in for the accretion disk, and positioned it around the spinning black hole. The result was unprecedented and extremely amazing: the warping of space around the black hole also warped the accretion disk, so instead of looking like Saturn’s rings, the light formed an extraordinary halo around the black hole.
The Double Negative team (the company doing the CGI for Interstellar) thought of it as a bug until it was shown to Thorne. It led to a moment of discovery, as Thorne realized the team had correctly modelled a phenomenon inherent in the math he’d supplied.
No one knew what a black hole would look like until they built one. Light, temporarily trapped around the black hole, produced an unexpectedly complex fingerprint pattern near the black hole’s shadow, and the glowing accretion disk appeared above the black hole, below it, and in front of it. Thorne had never expected this, but he later realized the phenomenon had been there in the math all along, just waiting to be unlocked. In the end, Nolan got his visually immersive movie, Thorne got his wish of making a movie that taught its audience some accurate science, and both of them got something they never expected: a scientific discovery. That is why the black hole’s appearance in the movie is visually so complex: because it’s accurate.


There is no doubt that Interstellar came together beautifully; the merger of real science and stunning visuals has transformed this movie into a science fiction classic, one where the science is barely fictional, yet so far beyond our reach that it can be realized right now only in fiction, though it might be tapped into in the future. Thorne also hoped that the movie might act as bait for viewers, attracting some of them towards the field of astrophysics and a career in it, rather than becoming lawyers, doctors and other professionals. In writing this article, I echo the same feeling: I hope someone reading this starts taking astrophysics seriously as a future, for it is right now the field of physics with the most possibilities, with so little known and so much still to discover. And who knows? In some dystopian future, this might just save us.

Sunday 22 February 2015

Why we need self-driving cars

After being enlightened about Google’s new self-driving electric car, I spent my whole afternoon cheating on FIFA whilst reading about this god-sent technology.

Instead of a steering wheel and pedals, this battery-powered electric vehicle has a stop-go button. The novel prototypes have a plastic build for the most part, but limited speed: the battery/electric propulsion system restricts the maximum speed to about 40 kph (25 mph). Google plans to manufacture around 200 of these extremely cute, mostly-plastic cars over the next year, albeit restricting road testing to California for the next couple of years.
Well, now without any further ado, allow me to tell you why we need self-driving cars:

1. First and foremost, we humans are flawed beings! We snooze, we text, we eat behind the wheel, not to mention the drunk imbeciles revolting against speed limits and traffic rules! Road accidents have become such a primary cause of death in our country that probably even the “Grim Reaper” is begging for mercy. More than 100,000 deaths due to car accidents, and there are dimwits who have still failed to learn.

We need these cars to take over the roads soon, for there is definitely a plenitude from our flawed race who’ll follow suit!
2. Now, questions like “how much will these cars cost?” will arise. But instead of thinking superficially, we should delve deeper and consider that there is a plethora of disabled people in the world who work. We can’t ignore how this technology could transform the lives of the elderly or the disabled.

3. The cars use a mixture of 3D laser mapping, GPS and radar to analyze and interpret their surroundings. The radar is interesting, as it allows the car to see through objects instead of relying on line of sight. As of now, the cars can’t process a variety of complex situations, but Google hopes that with significant development they will eventually handle all of this as well as (or better than) a human can.

4. These cars are adorable!
If you take other EVs into consideration, like the Tesla Model S or the Toyota Prius, they have a more aggressive, intimidating stance. And the most intriguing thing is that these cars were deliberately designed to look endearing. Why so? Human psychology: our brains are hardwired to treat objects, animate or inanimate, with care, caution and reverence if they resemble living things, because our moral compasses snap into place.
5. There will definitely be myriad skeptics who wonder how autonomous vehicles could take over the highway. What they need to understand is that a robot is already differentiating cars from pedestrians, firing millions of photons from a laser, and interpreting, processing and reacting to the hand signals of a cyclist, all at the same time. Instead of an organic brain that has had millions of years to evolve and yet fumbles at intersections, our chauffeur will be an artificial brain born less than a decade ago. It obviously still needs to evolve.
So why don’t we ignore some temporary shortcomings and thank Google for trying to eliminate human error from a chore that has been entirely controlled by humans for decades?
Let’s embrace innovation and get ready for a revolution in transportation.









When Poop became Water



One man’s trash is another man’s treasure. With this philosophy, Bill Gates, the philanthropist, has taken up the task of ending the misery of billions of people when it comes to clean drinking water. But how does he do it, and that too in the cheapest way possible? BY TURNING POOP INTO WATER!

The Janicki Omniprocessor, developed by Peter Janicki, CEO of Janicki Bioenergy, and funded by the Gates Foundation, turns sewer sludge into electricity, clean drinking water and ash. It seems the perfect solution for places where what comes out of the tap is worse than what runs off the roof when it’s raining. Bill Gates states, “At least 2 billion use latrines that aren’t properly drained. Others simply defecate in the open. The waste contaminates drinking water for millions of people and has horrific consequences. Diseases caused by poor sanitation kill around 700,000 children every year and prevent many more from fully developing both mentally and physically.” If we can develop safe, clean and affordable sanitation and waste-management techniques, we can prevent many deaths and help promote a healthy lifestyle. One idea, as Gates suggested, is to reinvent the toilet so that human waste is destroyed or converted into a valuable resource such as fuel or fertilizer.
https://www.youtube.com/watch?v=bVzppWSIFU0
Another way is to reinvent the sewage-treatment plant. With this vision of Bill’s, Janicki Bioenergy designed and built the Janicki Omniprocessor. First, the machine is fed with sewer sludge, which is boiled at approximately 1,000 degrees Celsius inside a large tube called the dryer. In the boiling process, the water vapour is separated from the solids. The dried solids are then fed into a fire, which produces high-pressure, high-temperature steam. The steam drives a steam engine coupled to a generator, which in turn creates electricity. This electricity, apart from running the Omniprocessor itself, can also be supplied to the community. The water vapour created in the boiling process is run through a cleaning system that uses a cyclone and several filters to remove harmful particles; further condensation yields clean drinking water. “The water tasted as good as any I’ve had out of a bottle,” Gates continued. “And having studied the engineering behind it, I would happily drink it every day. It’s that safe.”
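The steps above amount to a simple mass balance: the moisture boiled out of the sludge becomes drinking water, and the dry solids become the fuel that keeps the machine running. Here is a minimal sketch of that balance; every number in it (throughput, moisture fraction, recovery efficiency) is a purely hypothetical illustration, not a Janicki Bioenergy specification:

```python
def omniprocessor_balance(sludge_kg_per_day, moisture_fraction=0.80,
                          water_recovery=0.90):
    """Toy mass balance for a sludge-to-water plant.

    All parameters are hypothetical illustrations, not Janicki specs:
      moisture_fraction -- fraction of the incoming sludge that is water
      water_recovery    -- fraction of that water captured after boiling,
                           cyclone cleaning and condensation
    """
    water_in = sludge_kg_per_day * moisture_fraction
    dry_solids = sludge_kg_per_day - water_in    # burned to raise steam
    drinking_water = water_in * water_recovery   # condensed and filtered
    lost_vapour = water_in - drinking_water
    return {"drinking_water_kg": drinking_water,
            "fuel_solids_kg": dry_solids,
            "lost_vapour_kg": lost_vapour}

# e.g. 10 tonnes of sludge a day (a made-up figure)
print(omniprocessor_balance(10_000))
```

The point of the sketch is simply that nothing is wasted: every kilogram of input ends up as water, fuel or a small vapour loss.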
One Omniprocessor is designed to continually provide water for up to 100,000 people. Local entrepreneurs will run the processors, collecting sludge to produce the water and energy. The first machine will be tested in Dakar, Senegal. The Gates Foundation is working towards making the processors cheap enough that entrepreneurs in low- and middle-income countries will want to invest in them and start profitable waste-treatment businesses. India is seen as a great prospect for the Omniprocessor: it has plenty of entrepreneurs who could own and operate the processors, as well as companies with the skill to manufacture many of the parts.
The whole business model associated with it opens up various doors and opportunities. “Now, waste would turn into a commodity with its real value in the marketplace”, states Gates. In places where fresh water is hard to come by, water from poop is the way ahead.


Thursday 5 February 2015

The wonder material

Curiosity-driven research: that’s what led to the inception of the wonder material on a serendipitous evening in 2002, when Dr. Andre Geim was pondering carbon. He wondered how ultra-thin layers of carbon might behave under experimental conditions. Graphite was unequivocally the most favorable material to work with, but the usual methods of isolating extremely thin layers would overheat the material, ultimately destroying it. Geim’s “scotch-tape” technique would go on to become renowned for isolating the world’s first two-dimensional material: a layer of carbon only an atom thick which, under a microscope, resembled a flat lattice of hexagons linked in a honeycomb pattern. This was the birth of graphene.


Soon after, Dr. Andre Geim and Konstantin Novoselov started tinkering with graphene. Over the next couple of years, a series of experiments revealed some stupefying properties of the material. Its unique structure lets electrons flow unfettered through the lattice at phenomenal speeds; they found that graphene could conduct a thousand times more electrical current than copper. The elfin material also exhibited the field effect (the response some materials show when placed in an electric field, which allows scientists to control their conductivity; the field effect is one of the defining characteristics of silicon, used in computer chips). This hinted that graphene could substitute for silicon in the future.
In October 2004, their paper, “Electric Field Effect in Atomically Thin Carbon Films,” was published in Science, and it astonished scientists. Youngjoon Gil, executive vice-president of the Samsung Advanced Institute of Technology, stated: “It was as if science fiction had become reality.” Six years later, in 2010, Geim and Novoselov were awarded the Nobel Prize in Physics.
James Tour, a researcher at Rice University, said that the “mobility” with which electronic information can flow across graphene’s surface is the most tantalizing of the properties described in Geim and Novoselov’s paper. “The slow step in our computers is moving information from point A to point B,” Tour said. “Now you’ve taken the slow step, the biggest hurdle in silicon electronics, and you’ve introduced a new material and—boom! All of a sudden, you’re increasing speed not by a factor of ten but by a factor of a hundred, possibly even more.” This is a much-needed boost for the semiconductor industry, which has been slogging to keep up with Moore’s law, devised by Gordon Moore, co-founder of Intel. He predicted that every two years the density and effectiveness of computer chips would double. Engineers have managed to keep up with Moore’s law for five decades, but there’s a limit: shrink a chip too much and its transistors sit too close together, and silicon stops working. Soon, silicon chips may no longer be able to keep pace with Moore’s law. Graphene could offer a solution.
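The scale of that five-decade run is easy to appreciate once you do the arithmetic: a doubling every two years compounds into an enormous factor. A quick back-of-the-envelope check:

```python
# Moore's law: transistor density doubles roughly every two years.
years = 50
doublings = years // 2          # one doubling per two-year period
growth = 2 ** doublings         # cumulative density increase

print(f"{doublings} doublings over {years} years -> x{growth:,}")
# 25 doublings: roughly a 33-million-fold increase in density
```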
But it’s not just the computer and electronics industry that will be beneficiaries of a possible Graphene revolution.
Tour has sold patents for a graphene-infused paint whose conductivity could help remove ice from helicopter blades, for fluids that improve the efficiency of oil drills, and for graphene-based materials for the inflatable slides and life rafts used in airplanes. He pointed out that graphene is the only substance on Earth that is entirely impermeable to gas, and it barely weighs anything. Lighter rafts and slides would help airline companies save millions of dollars a year on fuel.
A certain Graphene-based gel is being experimented with as a scaffold for spinal-cord injuries. Instead of just having a nonfunctional scaffold material, having something that’s electrically conductive helps the nerve cells to communicate electrically and connect with each other. This has been successfully tested on lab rats whose hind legs had been paralyzed. Bionic devices that allow paraplegics to reuse their limbs may not be science fiction for too long.
When oxygen and hydrogen molecules were bonded to graphene, graphene oxide came into existence: something which may solve our problems of radioactive-waste disposal, as graphene oxide binds with radioactive materials, forming a sludge that can be scooped away without much ado.
Scientists at MIT are developing a graphene filter covered with holes so tiny that they only allow water to pass, keeping the salt out. Desalination of seawater may never have looked so simple, were it not for graphene.
In Marvel Comics’ Superior Iron Man #2, the armored Avenger Tony Stark donned an all-white armour instead of the familiar red and gold. Strikingly, this new suit appears to have no faceplate.
Well, not exactly: the faceplate is imagined as graphene. Because it is only one atom thick, graphene passes about 97% of visible light, making it more transparent than most glass, so we can indeed see his face through this thin carbon “faceplate.”
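That 97% figure compounds per layer: a single graphene sheet absorbs about 2.3% of visible light (the constant πα, where α is the fine-structure constant), so the transmittance of a stack falls off exponentially with the number of layers. A quick check:

```python
import math

ALPHA = 1 / 137.036                    # fine-structure constant
ABSORB_PER_LAYER = math.pi * ALPHA     # ~2.3% of light absorbed by one sheet

def transmittance(layers):
    """Fraction of visible light passing through `layers` stacked sheets."""
    return (1 - ABSORB_PER_LAYER) ** layers

print(f"1 layer:    {transmittance(1):.1%}")    # ~97.7% -- nearly invisible
print(f"100 layers: {transmittance(100):.1%}")  # ~10% -- a 100-layer stack is no longer see-through
```

So a one-atom faceplate really would be transparent, while the hundred-layer armour plating discussed below would not be.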


And maybe, just maybe, we may see our armies wearing these rather attractive suits made of Graphene. But could it be effective in protecting our soldiers? Experiments have proved that thin multi-layers of graphene, no more than a hundred atoms thick, are indeed ten times more "bullet-proof" than steel.
Wow. One material with myriad applications, and I have barely been able to scratch the surface.
Adjectives that can be used for Graphene: elfin, wondrous, sublime, stupendous.... I could probably go on forever.
Graphene may soon spell the beginning of another technological and industrial revolution. Well, the future looks bright for all of us and especially for those who slavishly devote themselves to technology.
So, let us all bask in the glory of “THE WONDER MATERIAL”.








Wednesday 4 February 2015

The Internet of Things: Key to a techno centric community

The first time I heard the term ‘The Internet of Things’ (IoT), I thought I had made a mistake and somehow missed out a portion of a sentence that talked about the internet and an unrelated thing. It turned out to be a good thing though, as the term stayed in my head till I could find a way to figure it out.

It is important to understand what IoT is all about. Considering how vast its scope is, everyone can look at it differently. Kevin Ashton, Cofounder and Executive Director of the Auto-ID Center at MIT, first mentioned the Internet of Things in a presentation he made to Procter & Gamble. He has explained the concept and potential of IoT in a very simple, effective way:
“Today computers -- and, therefore, the Internet -- are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the Internet were first captured and created by human beings by typing, pressing a record button, taking a digital picture or scanning a bar code.

The problem is, people have limited time, attention and accuracy -- all of which means they are not very good at capturing data about things in the real world. If we had computers that knew everything there was to know about things -- using data they gathered without any help from us -- we would be able to track and count everything and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling and whether they were fresh or past their best.”
For IoT to function, everything (animals, people, appliances etc.) has to be provided with a unique identifier and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. IPv4 is the Internet Protocol that is used to route most of the traffic on the Internet. The IPv4 address field is a 32 bit field and that brings with it a certain set of limitations. There are nearly 4.3 billion IP addresses available in this space and they are almost exhausted now. IPv6 was developed in the late 90s by the Internet Engineering Task Force (IETF) to address this issue. This scheme has more than 7.9×10^28 times as many addresses. This would easily tackle the growing demand of Internet connected devices. This enormous increase in address space is an important factor in the development of the Internet of Things. According to Steve Leibson, who identifies himself as “occasional docent at the Computer History Museum,” the address space expansion means that we could “assign an IPV6 address to every atom on the surface of the earth, and still have enough addresses left to do another 100+ earths.” In other words, humans could easily assign an IP address to every "thing" on the planet.
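The address arithmetic above is easy to verify: IPv4 uses a 32-bit address field and IPv6 a 128-bit one, so the ratio between the two spaces is 2^96. A few lines of Python confirm both figures quoted above:

```python
ipv4_addresses = 2 ** 32     # 32-bit address field
ipv6_addresses = 2 ** 128    # 128-bit address field
ratio = ipv6_addresses // ipv4_addresses   # = 2 ** 96

print(f"IPv4 space: {ipv4_addresses:,}")   # ~4.3 billion addresses
print(f"IPv6/IPv4 ratio: {ratio:.3e}")     # ~7.9e28, matching the figure above
```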

It is evident that the Internet of Things is a very complex and large scale process. The lack of address space is just one issue which has been resolved. Security and privacy issues along with the methods of indexing and storing humongous amounts of data need to be tackled.
Nonetheless, the implication is clear. We are heading towards a world where your environment knows you as much as you know it. Home automation, Google cabs that know your urgent appointments and a grocery store that knows your fridge is out of milk are not out of a fiction novel anymore. They are out there in the world you and I live in. The next wave of technological revolution is just around the corner and we are lucky to be a part of it. The Internet of Things will play a key role in paving the way to achieving a perfect environment. We, as a techno centric community, shall realize our goals through the path lit up bright and clear by the Internet of Things!

Hues of a Technocentric Community

There’s a reason that those with orthodox inclinations have fallen far behind the liberals or the ‘Techno Centric’ on the technology front. We read Wired or Digit or any gadget blog and note how definitively liberal the publications are—pro-science, pro-progress and pro-net neutrality.
We are not primitive anymore; we are not confined to ‘just us’ anymore. We have become aware of the environment, and we are wary of the mistakes our ancestors made. We know about the Global Climatic Shift (science stuff), and we read up on Stem Cell Research (more of that science stuff). We are a new breed, euphoric about technology; we are not the ones who sit in their shells, oblivious to the happenings of the outer world.



We have become better; we have adapted to the environment and adapted the environment to ourselves at the same time. We spiritedly dream about new technologies and make efforts to realize these dreams. We have turned to solar energy rather than conventional forms of energy. We prefer walking over driving. We embrace the Global Positioning System and make it better with crowd-sourced information. We make our own ways, and our avenues of fun are not limited to idle distraction but extend to perseverance and consistency.
We work to make our surroundings better, to make our neighborhoods better. We make informed political decisions; we are a part of the world family. Every piece of information sparks a shard of brilliance in our minds. We are not dull, we are not static; instead we have become dynamic, and for all the right reasons.
We work for a better future; we take ideas on the brink and turn them into wonders. We have become much more than we used to be.
We have become more efficient; we have become more devoted to common causes. We use technology and its warning signs carefully to amplify the meaningful and attenuate the noise. We have cut down on the incessant chatter and improved the flow of information. We process refined data rather than raw. We are making headway in every field we explore.
We are a global family.
We are a part of the Techno-Centric Community.

Monday 2 February 2015

On an epic journey of Science: Interstellar 1

Whether or not you agree with the science, Interstellar is generally considered a masterpiece: a visual treat with truly great acting and a great story. But that is not all Interstellar is. The visual treatment of the black hole isn’t pure imagination, though the depiction is unlike anything ever seen. In fact, it was because of Interstellar that this visual form was discovered. The movie actually led to two research papers by Caltech physicist Kip Thorne, who worked as a scientific consultant for the film: one for the astrophysics community and one for the computer-graphics community. I will divide this into two parts, one explaining the science of the wormhole, and the other that of the black hole.

Let’s begin with the wormhole used in the movie for interstellar travel. Before Interstellar, movies invariably depicted a wormhole as a flat circular hole in space. But under the supervision of Kip Thorne, Christopher Nolan sought to correct that. It has long been theorized that a wormhole is a spherical hole in space-time. This is because two points of our three-dimensional universe are connected by extending a singularity from each side through a higher-dimensional space called the bulk, so any tear connecting two points in our universe has to be a spherical hole. Let’s understand this with simple graphics. Of course, it is impossible to represent a four-dimensional space here, so the graphic uses a simple trick of collapsing a dimension, like we do in our engineering-graphics sheets.


To understand how two singularities connect two points in space-time, we must first understand how a singularity works. According to Einstein, the presence of mass at a point in space means the presence of gravity, and gravity affects the local space-time of its surrounding region. So when an enormous density is present at a point, it stretches the local space into the bulk as if it were made of a rubber sheet, giving it a cone-like projection whose tip is the singularity, where gravity and density become infinite and time almost stops. Such density is achieved naturally in the universe when a star collapses into itself and its mass gets concentrated into an almost negligible volume. Let’s look at this now.


If the universal plane were folded, and the two singularities extended into each other, they would form a tunnel through the bulk, a shortcut through space-time, and anyone in the plane could fall through one opening of the tunnel to reach the other end quickly.


So far we have been picturing the wormhole with one dimension collapsed, our universe visualized as a 2D plane and the mouth of the wormhole as a flat circular hole. But since the universe is actually three-dimensional, the mouth of the wormhole must also be three-dimensional: a spherical hole in space. Exactly as depicted in Interstellar.
For the computer-graphics team behind Interstellar, this posed a problem: how to depict a spherical hole out of pure imagination? Not wanting to compromise the accuracy of the film’s depiction, the visual-effects supervisor asked Kip Thorne for help, and Thorne provided general equations that would let the team trace the behavior of light rays around a wormhole.

The first thing they worked out was that light wouldn’t behave classically around a wormhole; that is, it wouldn’t travel in a straight line. None of the rendering software available at the time could handle this, so the CGI team had to write a completely new renderer based on the equations provided by Thorne, and then rendered the wormhole. The result was nothing like what anyone could have visualized: the wormhole was like a crystal ball reflecting the universe, a spherical hole in space-time. What looks like a reflection, if the wormhole were viewed in person, would be the space at the other end of the wormhole. This is the most accurate depiction of a wormhole ever made, and it gave new insights into the phenomenon even to Kip Thorne, who helped design it.
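We can get a feel for this non-straight-line behaviour with a toy model. The sketch below is not the ray-tracing maths Thorne supplied to the effects team (those equations are in his graphics paper); it integrates light rays in a simple Ellis wormhole, the textbook traversable-wormhole geometry, where a ray either falls through the throat or is deflected back depending on its angular momentum:

```python
def trace_ray(angular_momentum, throat=1.0, energy=1.0,
              l0=20.0, dt=1e-3, steps=200_000):
    """Integrate a light ray in an Ellis wormhole (toy model).

    l is the radial coordinate: +infinity on our side, 0 at the
    throat, -infinity on the far side.  The circumferential radius is
    r(l) = sqrt(throat**2 + l**2), and a light ray obeys
        (dl/dt)**2 = E**2 - L**2 / r(l)**2,
    integrated here in second-order form with semi-implicit Euler.
    Returns the final l: negative means the ray traversed the wormhole.
    """
    L, E = angular_momentum, energy
    l = l0
    v = -(E**2 - L**2 / (throat**2 + l**2)) ** 0.5  # start moving inward
    for _ in range(steps):
        # radial "force" from the effective potential V = L^2 / (2 r^2)
        a = L**2 * l / (throat**2 + l**2) ** 2
        v += a * dt
        l += v * dt
        if abs(l) > l0 + 1:   # the ray has escaped one side or the other
            break
    return l

# Impact parameter L/E smaller than the throat radius: the ray falls through.
print("L=0.5:", "traversed" if trace_ray(0.5) < 0 else "deflected back")
# Larger than the throat radius: the ray turns around and comes back.
print("L=2.0:", "traversed" if trace_ray(2.0) < 0 else "deflected back")
```

Rays that return to our side arrive from directions that make the mouth look like a lensed ball of the far side’s sky, which is exactly the crystal-ball appearance described above.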
The wormhole depicted in the film is 2.5 miles in diameter and connects two points in space nearly ten billion light-years apart. The most interesting part is that it is, accurately, said to have been placed there by someone; it is not a natural phenomenon, because such a merger of two singularities in the bulk is not possible without some external force, which in this case is humans from the far future. This information is neatly integrated into the story.


I’ll explain the equally beautiful and accurate science behind the black hole Gargantua and the relative passage of time depicted in the movie in the next part. Till then, watch this space for other great articles.


Credits:
The Science of Interstellar – Kip Thorne
Wrinkles in Space-Time: The Warped Astrophysics of Interstellar – Adam Rogers [Wired.com]
The Science of ‘Interstellar’ Explained – [Space.com]