IS 21 DECEMBER 2012 THE END OF THE WORLD?




IS THIS THE END?


Is this possible? The Maya calendar says it is.

The Maya used several calendars simultaneously. One of them, called the "Long Count", is a continuous record of days from a zero date that correlates to Aug. 13, 3114 BC. According to this reading of the Maya calendar, when both wheels of the calendar complete their rotation the calendar ends, and that would be THE END OF THE WORLD.




So Who Are the Maya People?

One of the most amazing cultures of the New World inhabited a region encompassing today's Guatemala, Belize, Honduras and El Salvador, and parts of southern Mexico: the states of Yucatan (just west of Cancun), Campeche, Quintana Roo, Tabasco and Chiapas. Today this area is occupied by the descendants of the ancient Maya, the vast majority of whom have to some extent preserved their cultural heritage and still speak the Mayan language.

By 5000 BC, the Maya had settled in fishing communities along the Caribbean and Pacific coasts. By 2000 BC the Maya had also moved inland and adopted agriculture for their subsistence. Maize and beans formed the Maya diet then as today, although many other foodstuffs (manioc, squash, tomatoes, peppers, fruit, and game) were supplements.


To study the Maya, scholars divide their development into periods: the earliest Maya culture is called Formative or Pre-Classic (2000 BC-AD 300), the Classic period runs from AD 300 to AD 900, and the subsequent civilization is known as Post-Classic (AD 900 to the conquest).

Now we know that the Maya began to develop intensive agriculture and sophisticated water management during the Middle Pre-Classic (900-300 BC), surely to help support the population explosion of the Late Pre-Classic (300 BC-AD 300). During this same period, writing was invented in Mesoamerica, and the Maya began to use it during the Late Pre-Classic.

The Maya were the first people of the New World to keep historical records, and even if writing in the New World did not originate among the Maya, they developed and used it extensively. The Maya wrote a mixed script, with ideographic and phonetic elements.

Most of their writing survives on stelae, stone monuments very common in Maya cities. These recount mostly civil events and record the Maya's calendric and astronomical knowledge.

Maya pottery gives testimony of their religion and elaborate mythology. Four Post-Classic Maya screenfold manuscripts, called codices, have survived. They reveal Maya calendric and astronomical calculations, as well as rituals, offerings, and auguries for the year.

The Maya used several calendars simultaneously. One of them, called the "Long Count", is a continuous record of days from a zero date that correlates to Aug. 13, 3114 BC, and is more precise than the Julian calendar revised in Europe in 1582. The Maya were great astronomers and kept track of the solar and lunar years, eclipses and the cycles of visible planets. To carry out their calendric and astronomical calculations they developed a sophisticated mathematical system in which units are written with dots and bars represent five units. They discovered and used the zero, as well as a vigesimal positional system similar to the decimal positional system we use today.
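The arithmetic behind the Long Count is easy to check. Below is a minimal sketch in Python using the standard Long Count place values (kin, uinal, tun, katun, baktun; these unit names are not given in the article and are added here for illustration), showing that the famous 13.0.0.0.0 date amounts to 1,872,000 days, roughly 5,125 solar years after the zero date:

```python
# Convert a Maya Long Count date to a count of days since the zero date.
# The Long Count is almost purely vigesimal (base 20); the one exception
# is the tun, which is 18 uinals (360 days) rather than 20.
PLACE_VALUES = [144000, 7200, 360, 20, 1]  # baktun, katun, tun, uinal, kin

def long_count_to_days(baktun, katun, tun, uinal, kin):
    digits = [baktun, katun, tun, uinal, kin]
    return sum(d * v for d, v in zip(digits, PLACE_VALUES))

# 13.0.0.0.0, the completion of the 13th baktun: the date commonly
# correlated with 21 December 2012.
days = long_count_to_days(13, 0, 0, 0, 0)
print(days)                # 1872000 days from the zero date
print(days / 365.2425)     # roughly 5125 solar years
```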

During the Classic period monumental architecture and stelae with historical records were erected, on these monuments the Maya rulers reigned as divine kings. The Maya thrived during the Late Classic (AD 550-900), and art, architecture, writing, commerce and intensive agricultural practices flourished all through the Maya lands. More than 2 million people may have lived in the area, and it is estimated that Tikal, the largest center, had a population of 75,000-100,000.

However, the Classic Maya cities did not survive into the 10th century. It seems that the system of rule that had served them well for centuries failed. Probably faced with famine, foreign invasion, chronic warfare, adverse climatic conditions and perhaps disease, the Classic period ended in what is called the Classic Maya collapse. The Maya continued to live in both highlands and lowlands but the period of their greatest splendor was over. In the northern Yucatan peninsula, civilization continued at Uxmal and the surrounding area. The Post-Classic saw the splendor of Chichen Itza, which was probably abandoned by the 12th century. Trading towns survived along the Caribbean coast. Tulum, a spectacular walled city and a major trading town located above the coastline of what is now the state of Quintana Roo on Mexico's Caribbean seashore, is a great example of these. When seen from a Spanish ship, the city was compared to Seville.

The Maya of Yucatan finally broke up into small states, and the Spanish took advantage of this division to take control in 1542. In that year, after having been fought off for 15 years, they were able to establish their capital at Merida (in today's state of Yucatan, Mexico) on the site of a Maya city called Tiho. The last of the Maya kingdoms, Tayasal, in Lake Peten Itza (Guatemala), was conquered by the Spanish in 1697, 155 years after the conquest of Merida.

The Calendar
Calendar (Latin calendarium)

1.- A catalog that registers all the days of a year, distributed in weeks and months, with astronomical data, such as the times of sunrise and sunset and the phases of the Moon, or with religious information such as patron saints and festivities.

2.- A time-division system. All of the world's cultures have had calendars, initially lunar and later lunisolar. The Chaldeans and Babylonians passed their calendric knowledge to the Egyptians, who passed it in turn to the Greeks, and they finally to the Romans, who adopted it for common use.

From the beginning of civilization there has been a very close link between astrology and the development of the calendar. The importance of this connection is evident considering the need to determine the times for the most basic functions of early societies such as agriculture and the celebration of religious events.

The most ancient calendars were probably based on lunar observation, since the Moon's phases recur at an easily observed interval. Most likely the sighting of the crescent Moon marked a new time period. It was observed that recurrences of the Moon's phases were about 29 days apart. This gave birth to the first lunar calendars, containing 29-30 days per time period (month); but since the sum of twelve or thirteen months differs from the length of a tropical year, such a calendar was not completely suitable for agricultural practices.

Due to this difference, and in order to keep in step with the Sun, lunisolar calendars were born, adding a complementary time period to the total of days in the Moon's cycles so as to equal the solar year. Many such calendars, with variations, existed through time in different areas of the world. In pre-Columbian America the Maya and Aztec calendars were very important. They are remarkably accurate; each consists of 18 months of 20 days plus five supplemental days.
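To see why a purely lunar calendar drifts, and how quickly an extra ("intercalary") month becomes necessary, here is a small illustrative calculation; the month and year lengths are standard astronomical values, not figures from the article:

```python
# How far a 12-month lunar year drifts from the solar year, and why
# lunisolar calendars need an occasional extra month.
SYNODIC_MONTH = 29.53059   # mean days between recurrences of a Moon phase
TROPICAL_YEAR = 365.2422   # mean days in a solar (tropical) year

lunar_year = 12 * SYNODIC_MONTH            # about 354.37 days
drift = TROPICAL_YEAR - lunar_year         # about 10.9 days per year
print(f"12 lunar months = {lunar_year:.2f} days")
print(f"annual drift against the Sun = {drift:.2f} days")
# After roughly 3 years the accumulated drift exceeds one lunar month,
# so a 13th month is inserted to pull the calendar back into step.
print(f"years until drift exceeds one month = {SYNODIC_MONTH / drift:.1f}")
```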

Later came the solar calendars, for example the Julian calendar, instituted in Rome by Julius Caesar in 46 BC. This calendar set the year's length at 365 days and added one day to the year every four years. Pope Gregory XIII modified this calendar in 1582. And even though the Gregorian calendar is a solar calendar, in the sense that it does not take the Moon into account in its calculations, it does contain rules for determining Easter and other religious holidays that are based on both the Sun and the Moon. The Gregorian calendar is used today in most of the world and is divided into the twelve months we all know.
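The difference between the two leap-year rules is small but decisive over the centuries. A minimal sketch of both rules as described above:

```python
def julian_leap(year):
    # Julian rule: one extra day every four years.
    return year % 4 == 0

def gregorian_leap(year):
    # Gregorian refinement of 1582: century years are leap years
    # only when divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

for y in (1900, 2000, 2012):
    print(y, "Julian:", julian_leap(y), "Gregorian:", gregorian_leap(y))
# 1900 is a leap year under the Julian rule but not the Gregorian one;
# dropping three leap days every 400 years keeps the calendar in step
# with the solar year.
```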

Historically, people have sensed the need for a fixed point from which to start their time calculations. Generally the starting point has been determined either by a historical event (the birth of Jesus) or by a hypothetical event (the date of the world's creation). Of all known cultures, the Maya seem to have been the first to discover the need for such a date; probably using an astronomically significant or hypothetical event, they placed it at 3114 BC.

SCIENTISTS SAY 21 DECEMBER 2012 IS NOT THE END OF THE WORLD BUT THE END OF THE MAYA CALENDAR

SO DO WE HAVE TO WORRY? WILL THIS BE THE END?

THE ONLY WAY TO KNOW IS TO WAIT FOR 21 DECEMBER 2012



SCIENTISTS FOUND GOD?





  The Large Hadron Collider




Our understanding of the Universe is about to change

The Large Hadron Collider (LHC) is a gigantic scientific instrument near Geneva, where it spans the border between Switzerland and France about 100m underground. It is a particle accelerator used by physicists to study the smallest known particles – the fundamental building blocks of all things. It will revolutionise our understanding, from the minuscule world deep within atoms to the vastness of the Universe.


Two beams of subatomic particles called "hadrons" – either protons or lead ions – travel in opposite directions inside the circular accelerator, gaining energy with every lap. Physicists use the LHC to recreate the conditions just after the Big Bang, by colliding the two beams head-on at very high energy. Teams of physicists from around the world then analyse the particles created in the collisions using special detectors in a number of experiments dedicated to the LHC.
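To get a feel for the energies involved, here is a back-of-the-envelope check of how close an LHC proton gets to the speed of light. The 7 TeV beam energy is the machine's published design value, assumed here rather than taken from the text above:

```python
import math

PROTON_REST_ENERGY_GEV = 0.938272   # proton rest-mass energy
BEAM_ENERGY_GEV = 7000.0            # 7 TeV design beam energy (assumed)

gamma = BEAM_ENERGY_GEV / PROTON_REST_ENERGY_GEV   # Lorentz factor E / mc^2
beta = math.sqrt(1.0 - 1.0 / gamma**2)             # speed as a fraction of c

print(f"Lorentz factor: {gamma:.0f}")   # about 7460
print(f"speed: {beta:.9f} c")           # 0.999999991 c
```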


There are many theories as to what will result from these collisions. For decades, the Standard Model of particle physics has served physicists well as a means of understanding the fundamental laws of Nature, but it does not tell the whole story. Only experimental data using the high energies reached by the LHC can push knowledge forward, challenging those who seek confirmation of established knowledge, and those who dare to dream beyond the paradigm.






Why is there no more antimatter?

We live in a world of matter – everything in the Universe, including ourselves, is made of matter. Antimatter is like a twin version of matter, but with opposite electric charge. At the birth of the Universe, equal amounts of matter and antimatter should have been produced in the Big Bang. But when matter and antimatter particles meet, they annihilate each other, transforming into energy. Somehow, a tiny fraction of matter must have survived to form the Universe we live in today, with hardly any antimatter left. Why does Nature appear to have this bias for matter over antimatter?


The LHCb experiment will be looking for differences between matter and antimatter to help answer this question. Previous experiments have already observed a tiny behavioural difference, but what has been seen so far is not nearly enough to account for the apparent matter–antimatter imbalance in the Universe.


Secrets of the Big Bang

What was matter like within the first second of the Universe’s life?

Matter, from which everything in the Universe is made, is believed to have originated from a dense and hot cocktail of fundamental particles. Today, the ordinary matter of the Universe is made of atoms, which contain a nucleus composed of protons and neutrons, which in turn are made of quarks bound together by other particles called gluons. The bond is very strong, but in the very early Universe conditions would have been too hot and energetic for the gluons to hold the quarks together. Instead, it seems likely that during the first microseconds after the Big Bang the Universe would have contained a very hot and dense mixture of quarks and gluons called quark–gluon plasma.


The ALICE experiment will use the LHC to recreate conditions similar to those just after the Big Bang, in particular to analyse the properties of the quark-gluon plasma.


Hidden worlds…


Do extra dimensions of space really exist?

Einstein showed that the three dimensions of space are related to time. Subsequent theories propose that further hidden dimensions of space may exist; for example, string theory implies that there are additional spatial dimensions yet to be observed. These may become detectable at very high energies, so data from all the detectors will be carefully analysed to look for signs of extra dimensions.




Why the LHC

A few unanswered questions...


The LHC was built to help scientists to answer key unresolved questions in particle physics. The unprecedented energy it achieves may even reveal some unexpected results that no one has ever thought of!


For the past few decades, physicists have been able to describe with increasing detail the fundamental particles that make up the Universe and the interactions between them. This understanding is encapsulated in the Standard Model of particle physics, but it contains gaps and cannot tell us the whole story. Filling in the missing knowledge requires experimental data, and the next big step to achieving this is with the LHC.

Newton's unfinished business

What is mass?

What is the origin of mass? Why do tiny particles weigh the amount they do? Why do some particles have no mass at all? At present, there are no established answers to these questions. The most likely explanation may be found in the Higgs boson, a key undiscovered particle that is essential for the Standard Model to work. First hypothesised in 1964, it has yet to be observed.

The ATLAS and CMS experiments will be actively searching for signs of this elusive particle.


An invisible problem...




What is 96% of the universe made of?

Everything we see in the Universe, from an ant to a galaxy, is made up of ordinary particles. These are collectively referred to as matter, forming 4% of the Universe. Dark matter and dark energy are believed to make up the remaining proportion, but they are incredibly difficult to detect and study, other than through the gravitational forces they exert. Investigating the nature of dark matter and dark energy is one of the biggest challenges today in the fields of particle physics and cosmology.

The ATLAS and CMS experiments will look for supersymmetric particles to test a likely hypothesis for the make-up of dark matter.


How the LHC works



The LHC, the world’s largest and most powerful particle accelerator, is the latest addition to CERN’s accelerator complex. It mainly consists of a 27-kilometre ring of superconducting magnets with a number of accelerating structures to boost the energy of the particles along the way.

Inside the accelerator, two beams of particles travel at close to the speed of light with very high energies before colliding with one another. The beams travel in opposite directions in separate beam pipes – two tubes kept at ultrahigh vacuum. They are guided around the accelerator ring by a strong magnetic field, achieved using superconducting electromagnets. These are built from coils of special electric cable that operates in a superconducting state, efficiently conducting electricity without resistance or loss of energy. This requires chilling the magnets to about -271°C – a temperature colder than outer space. For this reason, much of the accelerator is connected to a distribution system of liquid helium, which cools the magnets, as well as to other supply services.



Thousands of magnets of different varieties and sizes are used to direct the beams around the accelerator. These include 1232 dipole magnets of 15m length which are used to bend the beams, and 392 quadrupole magnets, each 5–7m long, to focus the beams. Just prior to collision, another type of magnet is used to "squeeze" the particles closer together to increase the chances of collisions. The particles are so tiny that the task of making them collide is akin to firing needles from two positions 10km apart with such precision that they meet halfway!
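The strength required of those dipole magnets follows from a standard accelerator relation, p [GeV/c] = 0.2998 · B [T] · r [m]. A rough sketch, assuming 7 TeV protons and an effective bending radius of about 2.8 km (smaller than the ring's geometric radius because the dipoles fill only part of the 27 km circumference; both figures are assumptions added here):

```python
# Rough estimate of the dipole field needed to keep a 7 TeV proton
# beam on its circular path, via p [GeV/c] = 0.2998 * B [T] * r [m].
MOMENTUM_GEV = 7000.0        # beam momentum at 7 TeV operation (assumed)
BENDING_RADIUS_M = 2804.0    # approximate effective bending radius (assumed)

field_tesla = MOMENTUM_GEV / (0.2998 * BENDING_RADIUS_M)
print(f"required dipole field: {field_tesla:.2f} T")   # about 8.3 T
```

A field of roughly 8 tesla is far beyond what conventional electromagnets can sustain, which is why the LHC needs superconducting coils cooled to about -271°C.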



All the controls for the accelerator, its services and technical infrastructure are housed under one roof at the CERN Control Centre. From here, the beams inside the LHC are made to collide at four locations around the accelerator ring, corresponding to the positions of the particle detectors.






Heavy-ion physics at the LHC

In the LHC heavy-ion programme, beams of heavy nuclei ("ions") collide at energies up to 30 times higher than in previous laboratory experiments. In these heavy-ion collisions, matter is heated to more than 100,000 times the temperature at the centre of the Sun, reaching conditions that existed in the first microseconds after the Big Bang. The aim of the heavy-ion programme at the LHC is to produce this matter at the highest temperatures and densities ever studied in the laboratory, and to investigate its properties in detail. This is expected to lead to basic new insights into the nature of the strong interaction between fundamental particles.

The strong interaction is the fundamental force that binds Nature's elementary particles, called quarks, into bigger objects such as protons and neutrons, which are themselves the building blocks of the atomic elements. Much is known today about the mechanism with which the elementary force-carriers of the strong interaction, the gluons, bind quarks together into protons and neutrons. However, two aspects of the strong interaction remain particularly intriguing.

First, no quark has ever been observed in isolation: quarks and gluons seem to be confined permanently inside composite particles, such as protons and neutrons. Second, protons and neutrons contain three quarks, but the mass of these three quarks accounts for only one percent of the total mass of a proton or neutron. So while the Higgs mechanism could give rise to the masses of the individual quarks, it cannot account for most of the mass of ordinary matter.

The current theory of strong interactions, called quantum chromodynamics, predicts that at very high temperatures, quarks and gluons are deconfined and can exist freely in a new state of matter known as the quark-gluon plasma. Theory also predicts that at the same temperature, the mechanism that is responsible for giving composite particles most of their mass ceases to act.

In the LHC heavy-ion programme, three experiments – ALICE, ATLAS and CMS – aim to produce and study this extreme, high-temperature phase of matter and provide novel access to the question of how most of the mass of visible matter in the Universe was generated in the first microseconds after the Big Bang.









X-51 WaveRider




NEW YORK TO LONDON IN JUST 45 MINUTES IN THIS HYPERSONIC VEHICLE


THE X-51 WaveRider




The Boeing X-51 (also known as X-51 WaveRider) is an unmanned scramjet demonstration aircraft for hypersonic (Mach 6, approximately 4,000 miles per hour (6,400 km/h) at altitude) flight testing. It successfully completed its first free-flight on 26 May 2010 and also achieved the longest duration flight at speeds over Mach 5.[1]
The X-51 is named "WaveRider" because it uses its shockwaves to add lift. The program is run as a cooperative effort of the United States Air Force, DARPA, NASA, Boeing, and Pratt & Whitney Rocketdyne. The program is managed by the Propulsion Directorate within the United States Air Force Research Laboratory (AFRL).[2] The X-51 had its first captive flight attached to a B-52 in December 2009.
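The "New York to London in under an hour" claim in the headline can be sanity-checked directly from the Mach 6 figure above. The great-circle distance is an assumption added here:

```python
# Flight time from New York to London at the X-51's top speed.
DISTANCE_KM = 5570.0   # approximate NY-London great-circle distance (assumed)
SPEED_KMH = 6400.0     # Mach 6 at altitude, per the figures above

minutes = DISTANCE_KM / SPEED_KMH * 60
print(f"flight time at Mach 6: {minutes:.0f} minutes")   # about 52 minutes
```

So a sustained Mach 6 cruise gives a flight time of just under an hour, though the 45-minute headline figure would require an even faster average speed.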





In the 1990s, the Air Force Research Laboratory (AFRL) began the HyTECH program for hypersonic propulsion. Pratt & Whitney received a contract from the AFRL to develop a hydrocarbon-fueled scramjet engine which led to the development of the SJX61 engine. The SJX61 engine was originally meant for the NASA X-43, which was eventually canceled. The engine was applied to the AFRL's Scramjet Engine Demonstrator program in late 2003.[3] The scramjet flight test vehicle was designated X-51 on 27 September 2005.[4]


X-51A under the wing of a B-52 at Edwards Air Force Base, July 2009
In flight demonstrations, the X-51 is carried by a B-52 to an altitude of about 50,000 feet (15.2 kilometers) and then released over the Pacific Ocean.[5] The X-51 is initially propelled by an MGM-140 ATACMS solid rocket booster to approximately Mach 4.5. The booster is then jettisoned and the vehicle's Pratt & Whitney Rocketdyne SJY61 scramjet accelerates it to a top flight speed near Mach 6.[6][7] The X-51 uses JP-7 fuel for the SJY61 scramjet, carrying some 270 lb (120 kg) onboard.[8]
Previously, DARPA viewed X-51 as a stepping stone to Blackswift,[9] a planned hypersonic demonstrator which was canceled in October 2008.[10]



Initial testing


SJX61-2 engine successfully completes ground tests simulating Mach 5 flight conditions.
Ground tests of the X-51A began in late 2006. A preliminary version of the X-51, the "Ground Demonstrator Engine No. 2", completed wind tunnel tests at the NASA Langley Research Center on 27 July 2006.[11] Testing continued there until a simulated X-51 flight at Mach 5 was successfully completed on 30 April 2007.[12][13] The testing is intended to observe acceleration between Mach 4 and Mach 6 and to demonstrate that hypersonic thrust "isn't just luck".[14][15] Four test flights were initially planned for 2009, but the first captive flight of the X-51A on a B-52 was not conducted until 9 December 2009,[16][17] with further captive flights in early 2010.[18][19]
Powered flight tests
The first powered flight of the X-51 was planned for 25 May 2010, but the presence of a cargo ship traveling through a portion of the Naval Air Station Point Mugu Sea Range caused a 24-hour postponement.[20] The X-51 completed its first powered flight successfully on 26 May 2010. It reached a speed of Mach 5, an altitude of 70,000 feet (21,000 m) and flew for over 200 seconds; it did not meet the planned 300-second flight duration, however.[1][21] The flight set the record for the longest scramjet burn time, 140 seconds. The previous record burn of 12 seconds was held by the X-43,[21][22][23] which also set the speed record of Mach 9.8 (12,144 km/h, 7,546 mph).
Three more test flights were planned, using the same flight trajectory.[22] Boeing proposed to the Air Force Research Laboratory that two test flights be added in order to increase the total to six, with flights taking place at four- to six-week intervals, assuming there were no failures.[24]
The second test flight was initially scheduled for 24 March 2011,[25] but was not conducted due to unfavorable test conditions.[26] The flight took place on 13 June 2011. However, the flight over the Pacific Ocean ended early due to an inlet unstart event after being boosted to Mach 5 speed. The flight data from the test is being investigated.[27] A B-52 released the X-51 at an approximate altitude of 50,000 feet. The X-51’s scramjet engine lit on ethylene, but did not properly transition to JP-7 fuel operation.[28]
The third test flight took place on 14 August 2012.[29] The X-51 was to make a 300 second, or 5 minute, experimental flight at speeds of five times the speed of sound, more than 3,600 mph.[30] After separating from its rocket booster, the craft lost control and crashed into the Pacific.[31] A statement by the Air Force Research Laboratory indicates a failure of the tail control surface as the cause.




General characteristics
Crew: None
Length: 25 ft (7.62 m)
Empty weight: 4,000 lb (1,814 kg)
Performance
Maximum speed: Mach 6+ (3,600+ mph; 5,800+ km/h)
Range: 460 miles (740 km)
Service ceiling: 70,000 ft (21,300 m)





BUT...


X-51 Waverider: Hypersonic jet ambitions fall short



The dreams of being able to fly from New York to London in under an hour are once again put on hold, as the latest effort to fly at over five times the speed of sound ends in failure.

When Chuck Yeager broke the sound barrier in 1947, it ushered in a new era of high-speed air travel. Now, engineers are trying to make the next leap to craft that can fly more than five times the speed of sound. But it is proving difficult.  

The most recent test flight ended in failure on Tuesday when a faulty control fin caused the US Air Force X-51 Waverider jet to lose control and crash into the Pacific Ocean.

The missile-like vehicle, powered by a supersonic combustion engine known as a scramjet, was dropped from a B-52 bomber off the coast of southern California. It was supposed to be propelled by a solid-rocket booster, then ignite its scramjet engine to reach speeds of up to Mach 6. In the end, the test flight lasted just 31 seconds.

It was the last of three planned tests of the X-51, designed to demonstrate the feasibility of a hypersonic missile. It now joins a long list of failed flights that show just how difficult it is to reach these so-called hypersonic speeds, usually defined as Mach 5 or above.

‘Rush to failure’

The appeal of hypersonics is simple: imagine an aircraft that can travel from New York to London in under an hour, or a missile that can reach anywhere in the world in less than two hours. But military experts have long cautioned that missiles, rather than reusable aircraft, are likely to be what will be developed first.

An air-to-air or even an air-to-ground missile is the most likely near-term application for hypersonics, says Werner Dahm, director of the Security and Defense Systems Initiative at Arizona State University and a former US Air Force chief scientist. "It's technologically much more achievable in the near to midterm," he says.

Engineers have taken a variety of approaches to hypersonic aircraft over the years: the X-51 is called a WaveRider because it literally rides its own shock waves, and it is powered by a scramjet, a variation of the traditional ramjet engine in which the air flowing through the engine remains supersonic while the fuel burns. But the Pentagon has also looked at a number of rocket-boosted gliders, and more complicated reusable aircraft powered by combination turbine and ramjet engines, among other designs.

But even building a test vehicle has proved difficult: the first X-51 flight test was cut short due to a flight anomaly, and the second test failed after the vehicle didn’t separate from its rocket, as planned. Yesterday’s failure is likely to raise even more questions about the future of hypersonic efforts. “Hypersonics test and evaluation is extremely unforgiving of miscalculation and error,” says Richard Hallion, a former senior advisor to the Air Force, and a leading expert on hypersonics.

Indeed, hypersonics has a mixed history, littered with the bodies of cancelled test vehicles, particularly those that have proved too ambitious. Hallion says many hypersonic research programs have suffered from a “rush to failure”, where flight vehicles have been flown too early, and then failed not because of an inherent problem in the vehicle, but because of a simple engineering mistake.

Most memorable, perhaps, was the 1980s-era National Aero-Space Plane, which was touted by President Ronald Reagan as a new Orient Express that could travel from Washington, DC to Tokyo in two hours, reaching speeds of up to 25 times the speed of sound. But the test aircraft, dubbed the X-30, proved too costly and vastly too complicated for the technology at the time.  

More recently, the Defense Advanced Research Projects Agency (Darpa), the research and development arm of the Pentagon, tried to revive the idea of a reusable hypersonic aircraft through a program called Blackswift, though it was soon canceled, after Congress questioned the ability to engineer such an aircraft, given previous failures.

INSIDE AN ELECTRIC MOTOR





Have you ever wondered what goes on inside an electric motor? We've taken apart a simple electric motor of the kind you would typically find in a toy to explain how all the parts work.





From the outside, you can see the steel can that forms the body of the motor, an axle, a nylon end cap and two battery leads.


The nylon end cap is held in place by two tabs that are part of the steel can. By bending the tabs back, you can free the end cap and remove it.


Inside the end cap are the motor's brushes. These brushes transfer power from the battery to the commutator as the motor spins.





The final piece of any DC electric motor is the field magnet. The field magnet in this motor is formed by the can itself plus two curved permanent magnets.



Two pieces of magnet can be seen inside the metallic can.


One end of each magnet rests against a slot cut into the can, and then the retaining clip presses against the other ends of both magnets. 
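Putting the pieces together: the battery drives current through the brushes and commutator into the rotor windings, the field magnets turn that current into torque, and the spinning rotor generates a back-EMF that opposes the battery. A minimal simulation of that interplay for a small permanent-magnet DC motor; all parameter values are invented for illustration:

```python
# Toy model of a permanent-magnet DC motor:
#   electrical:  V = I*R + Ke*w       (back-EMF opposes the battery)
#   mechanical:  J*dw/dt = Kt*I - b*w (motor torque versus friction)
V, R = 3.0, 2.0       # battery voltage (V), winding resistance (ohm)
KE = KT = 0.01        # back-EMF and torque constants (SI units)
J, B_FR = 1e-6, 1e-7  # rotor inertia (kg*m^2), friction coefficient
DT = 1e-4             # integration time step (s)

w = 0.0               # angular velocity (rad/s)
for _ in range(20000):                 # simulate 2 seconds of spin-up
    i = (V - KE * w) / R               # current through brushes/commutator
    w += (KT * i - B_FR * w) / J * DT  # Newton's law for the rotor
print(f"steady speed: {w:.0f} rad/s, current: {(V - KE * w) / R:.3f} A")
```

The motor settles where its torque balances friction; notice that the current is largest at stall and falls as the motor spins up, precisely because the back-EMF grows with speed.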




How the Mars Curiosity Rover Works





Move over, Spirit and Opportunity: There's a new Mars rover on the planet as of August 2012. With its six-wheel drive, rocker-bogie suspension system and mast-mounted cameras, it might resemble its venerable predecessors, but only in the way a pickup truck resembles a Humvee. We're talking about a nuclear-powered, laser-toting monster truck of science, complete with rocket pack -- a steal at $2.5 billion (tax, title, docking and freight fees included).
The Mars Science Laboratory, aka Curiosity, dominates the Mars rover showroom, stretching twice as long (about 10 feet, or 3 meters) and built five times as heavy (1,982 pounds, or 899 kilograms) as NASA's record-setting 2003 models, Spirit and Opportunity. It comes off-off-road ready, with no hubs to lock (and no one to lock them). Six 20-inch (51-centimeter) aluminum wheels tear over obstacles approaching 30 inches (75 centimeters) high and rack up 660 feet (200 meters) per day on Martian terrain.
Ladies and gentlemen, the 2011 Curiosity packs more gadgets than a Ronco warehouse -- everything from gear for collecting soil and powdered samples of rock, to sieves for prepping and sorting them, to onboard instruments for analyzing them. Curiosity's laser is a tunable spectrometer designed to identify organic (carbon-containing) compounds and determine the isotope ratios of key elements. Best of all, its tried-and-true nuclear power system, long used in satellites, spacecraft and lunar equipment flown aboard the Apollo missions, is guaranteed not to leave you stranded in a dust storm.
Yes indeed, NASA went back to the drawing board for this one, dreaming up a fractal-like arrangement to pack the finest selection of compact scientific accoutrements into the smallest space possible. But don't take our word for it: Ask Rob Manning, flight system chief engineer at Jet Propulsion Laboratory, who calls it "by far, the most complex thing we've ever built" [source: JPL].
No effort was spared for NASA's most ambitious rover to date. This workhorse will conduct more onboard scientific research, using a larger suite of laboratory instruments and sensors, than any previous Martian model. Order today, and NASA will deliver it to within 12 miles (20 kilometers) of your door (some limitations apply; door must be within 250-million-mile (402-million-kilometer) delivery area). Your rover will land with more precision and cover more rugged ground than any other, and it will have the best chance so far of capturing the history of water flow and the possibility of ancient habitable environments on Mars. Yes, if Motor Trend magazine had a category for space buggies, Curiosity would no doubt garner Rover of the Year.
Now, why don't you let us hold onto your keys while you take it for a test drive?



From Blueprint to Bullet
Years of testing, development and building-in fault tolerances culminated at 10:02 a.m. EST on Nov. 26, 2011, when the Mars Science Laboratory (MSL) launched from Cape Canaveral Air Force Station aboard an Atlas V rocket. It landed successfully on Mars at 1:32 a.m. EDT, Aug. 6, 2012.
Before loading Curiosity into its shell, engineers subjected the rover to a rigorous series of tests simulating both internal faults and external problems, punishments that included centrifuges, drop tests, pull tests, drive tests, load tests, stress tests and tests of shorting circuits [source: JPL].
Meanwhile, NASA had to decide where the new rover would explore, how it would get there and how the space agency could land it safely -- easier said than done.
Earth and Mars revolve around the sun at different rates -- 686.98 Earth days for Mars versus 365.26 for Earth -- which means their relative distance varies enormously. Reaching Mars on as little fuel as possible meant launching when the red planet passes closest to us [source: NASA]. This was no minor consideration: Mars swings out more than seven times as far from Earth at its farthest extreme (249.3 million miles, or 401.3 million kilometers) as at its nearest approach (34.6 million miles, or 55.7 million kilometers) [source: Williams].
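Those two orbital periods also explain why Mars launch opportunities come around only every couple of years: the synodic period (the time between successive close approaches) follows directly from them. A quick check:

```python
# Synodic period of Earth and Mars from the orbital periods quoted above.
EARTH_YEAR_DAYS = 365.26
MARS_YEAR_DAYS = 686.98

synodic_days = 1.0 / (1.0 / EARTH_YEAR_DAYS - 1.0 / MARS_YEAR_DAYS)
print(f"synodic period: {synodic_days:.0f} days "
      f"(about {synodic_days / 30.44:.0f} months)")   # ~780 days, ~26 months
```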
Like a quarterback throwing a pass, the launch system aimed not for where Mars was, but for where it would be when the craft arrived. NASA threw that pass, and the rover-football reached its round and red receiver more than 250 days later, touching down on Aug. 6, 2012 (Eastern Daylight Time).
NASA did not "throw" MSL from Earth's surface, however; the agency launched it from planetary orbit. Here's how: Once the lifting vehicle reached space from Cape Canaveral, its nose cone, or fairing, opened like a clamshell and fell away, along with the rocket's first stage, which cut off and plummeted to the Atlantic Ocean. The second stage, a Centaur engine, then kicked in, placing the craft into a parking orbit. Once everything was properly lined up, the rocket kicked off a second burn, propelling the craft toward Mars.
About 44 minutes after launch, MSL separated from its rocket and began communicating with Earth. As it continued on its way, it made occasional planned course corrections.
Once it hit the Martian atmosphere, the fun really began.
THE GALE CRATER
Curiosity began its journey exploring Gale, an impact crater nestled between Mars's southern highlands and northern lowlands. Measuring 96 miles (154 kilometers) across, Gale sprawls over an area equivalent to Connecticut and Rhode Island combined.



A Noiseless, Patient Rover
Within Gale, rising higher than Mount Rainier towers above Seattle, stands a sediment mountain 3 miles (5 kilometers) high. Composed of layers of minerals and soils, including clays and sulfates that point to a watery history, the mountain will provide an invaluable map of Martian geological history [sources: Siceloff; Zubritsky].
Past water would have flowed toward and collected in Gale's lowlands, making it a likely repository for the remains of streams, pools and lakes, and therefore an ideal place to find evidence of Mars's past habitability.
Like Walt Whitman's "noiseless patient spider," Curiosity will one day soon stand isolated on a promontory, sending back data from which its mission controllers will decide "how to explore the vacant vast surrounding." Its spidery resemblance does not end with poetic license or even its spindly, jointed legs, however; it extends to the spiderlike way the rover landed on the Martian surface.
Before we unravel that, however, let's look at the rocket-assisted jump the craft made when it first reached Mars.
When the spacecraft carrying Curiosity swung into the Martian atmosphere 78 miles (125 kilometers) above the ground, it steered and braked through a series of S-curves like those used by the space shuttles. During the minutes before touchdown, at around 7 miles (11 kilometers) up, the craft popped a parachute to slow its 900 mph (1,448 kph) descent. It then ejected its heat shield from the bottom of the cone, creating an exit for Curiosity.
The rover, with its upper stage clamped to its back like a turtle shell, fell clear of the cone. A few moments later, the upper stage's rim-mounted retro rockets blasted to life, stabilizing the pair into a hovering position about 66 feet (20 meters) above the surface; from here, the upper stage acted as a sky crane, lowering Curiosity like a spider on silk. Once the rover was safely on the ground, its tether was cut, and Curiosity set off on its journey [sources: NASA; JPL].
Shortly before touchdown, the Mars Descent Imager took high-definition color video of the landing zone. This footage aided with landing and provided a bird's-eye-view of the exploration area for researchers and mission specialists back home. Another array of instruments, the Mars Science Laboratory Entry, Descent and Landing Instrument Suite, will measure atmospheric conditions and spacecraft performance. NASA will use this data when planning and designing future missions.
The novel landing system was more complicated, but also more precisely controlled, than any before, enabling mission planners to bull's-eye the long-desired target of Gale Crater. Landing within Curiosity's 12-mile (20-kilometer) target area within the crater would have been impossible for Spirit and Opportunity, which needed five times as much area when bouncing down in their space-age bubble wrap. This success opened up a slew of desirable sites, including steep-walled craters previously off-limits due to their tricky terrain.
Curiosity will also lay the groundwork for future missions, just as previous Mars jaunts made the new rover's expedition possible. Such missions could include scooping up rocks and flying them back home, or carrying out more far-reaching surface surveys, seeking evidence of Martian microbial life and its key chemical ingredients [source: NASA].
Now that we've landed safe and sound, let's take a look at what kind of equipment comes standard with the Mars Science Laboratory package.


HOW DOES A CAPACITOR WORK





In a way, a capacitor is a little like a battery. Although they work in completely different ways, capacitors and batteries both store electrical energy. If you have read How Batteries Work, then you know that a battery has two terminals. Inside the battery, chemical reactions produce electrons on one terminal and absorb electrons on the other terminal. A capacitor is much simpler than a battery, as it can't produce new electrons -- it only stores them.


In this article, we'll learn exactly what a capacitor is, what it does and how it's used in electronics. We'll also look at the history of the capacitor and how several people helped shape its progress.
Inside the capacitor, the terminals connect to two metal plates separated by a non-conducting substance, or dielectric. You can easily make a capacitor from two pieces of aluminum foil and a piece of paper. It won't be a particularly good capacitor in terms of its storage capacity (a rough estimate appears after the list below), but it will work.
In theory, the dielectric can be any non-conductive substance. However, for practical applications, specific materials are used that best suit the capacitor's function. Mica, ceramic, cellulose, porcelain, Mylar, Teflon and even air are some of the non-conductive materials used. The dielectric dictates what kind of capacitor it is and for what it is best suited. Depending on the size and type of dielectric, some capacitors are better for high frequency uses, while some are better for high voltage applications. Capacitors can be manufactured to serve any purpose, from the smallest plastic capacitor in your calculator, to an ultra capacitor that can power a commuter bus. NASA uses glass capacitors to help wake up the space shuttle's circuitry and help deploy space probes. Here are some of the various types of capacitors and how they are used.
Air - Often used in radio tuning circuits
Mylar - Most commonly used for timer circuits like clocks, alarms and counters
Glass - Good for high voltage applications
Ceramic - Used for high frequency purposes like antennas, X-ray and MRI machines
Super capacitor - Powers electric and hybrid cars
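Returning to the foil-and-paper capacitor described above, the parallel-plate formula C = e0 * er * A / d shows just how modest its storage capacity would be. The sheet size, paper thickness and permittivity below are assumptions for illustration:

```python
# Capacitance of a homemade foil-and-paper capacitor,
# from the parallel-plate formula C = e0 * er * A / d.
EPSILON_0 = 8.854e-12   # permittivity of free space (F/m)
ER_PAPER = 3.5          # typical relative permittivity of paper (assumed)
AREA_M2 = 0.21 * 0.297  # one A4-sized sheet of foil (assumed)
GAP_M = 1e-4            # 0.1 mm of paper between the plates (assumed)

c = EPSILON_0 * ER_PAPER * AREA_M2 / GAP_M
print(f"capacitance: {c * 1e9:.0f} nF")   # about 19 nF
```

Tens of nanofarads is plenty for a filter or timing circuit, but nowhere near the soda-can-sized capacitors mentioned later that can light a bulb.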
In the next section, we'll take a closer look at exactly how capacitors work.



 Capacitor Circuit

In an electronic circuit, a capacitor is drawn as two parallel lines, the schematic symbol for its two plates.
When you connect a capacitor to a battery, here's what happens:

The plate on the capacitor that attaches to the negative terminal of the battery accepts electrons that the battery is producing.
The plate on the capacitor that attaches to the positive terminal of the battery loses electrons to the battery.


Once it's charged, the capacitor has the same voltage as the battery (1.5 volts on the battery means 1.5 volts on the capacitor). For a small capacitor, the capacity is small. But large capacitors can hold quite a bit of charge. You can find capacitors as big as soda cans that hold enough charge to light a flashlight bulb for a minute or more.
Even nature shows the capacitor at work in the form of lightning. One plate is the cloud, the other plate is the ground and the lightning is the charge releasing between these two "plates." Obviously, in a capacitor that large, you can hold a huge amount of charge!
Let's say you hook up a capacitor in a simple circuit with a battery and a light bulb:


Here you have a battery, a light bulb and a capacitor. If the capacitor is pretty big, what you will notice is that, when you connect the battery, the light bulb will light up as current flows from the battery to the capacitor to charge it up. The bulb will get progressively dimmer and finally go out once the capacitor reaches its capacity. If you then remove the battery and replace it with a wire, current will flow from one plate of the capacitor to the other. The bulb will light initially and then dim as the capacitor discharges, until it is completely out.
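The dimming described above follows the standard exponential charging law V_c(t) = V0 * (1 - e^(-t/RC)). A small sketch with invented component values:

```python
import math

V0 = 1.5   # battery voltage (V)
R = 10.0   # resistance of bulb plus wiring (ohm), assumed
C = 1.0    # a large 1 F capacitor, assumed

for t in (0.0, 5.0, 10.0, 20.0, 50.0):
    v_cap = V0 * (1.0 - math.exp(-t / (R * C)))   # capacitor voltage
    current = (V0 - v_cap) / R                    # current through the bulb
    print(f"t={t:5.1f} s  capacitor={v_cap:.2f} V  current={current * 1000:3.0f} mA")
# As the capacitor voltage approaches the battery's 1.5 V, the current
# (and with it the bulb's brightness) decays toward zero.
```

Discharging through the wire runs the same curve in reverse: the capacitor voltage and the bulb current decay exponentially from their starting values.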
In the next section, we'll learn more about capacitance and take a detailed look at the different ways that capacitors are used.


HOW DOES A QUANTUM COMPUTER WORK







The massive amount of processing power generated by computer manufacturers has not yet been able to quench our thirst for speed and computing capacity. In 1947, American computer engineer Howard Aiken said that just six electronic digital computers would satisfy the computing needs of the United States. Others have made similar errant predictions about the amount of computing power that would support our growing technological needs. Of course, Aiken didn't count on the large amounts of data generated by scientific research, the proliferation of personal computers or the emergence of the Internet, which have only fueled our need for more, more and more computing power.
Will we ever have the amount of computing power we need or want? If, as Moore's Law states, the number of transistors on a microprocessor continues to double every 18 months, the year 2020 or 2030 will find the circuits on a microprocessor measured on an atomic scale. And the logical next step will be to create quantum computers, which will harness the power of atoms and molecules to perform memory and processing tasks. Quantum computers have the potential to perform certain calculations significantly faster than any silicon-based computer.
Scientists have already built basic quantum computers that can perform certain calculations; but a practical quantum computer is still years away. In this article, you'll learn what a quantum computer is and just what it'll be used for in the next era of computing.
You don't have to go back too far to find the origins of quantum computing. While computers have been around for the majority of the 20th century, quantum computing was first theorized less than 30 years ago, by a physicist at the Argonne National Laboratory. Paul Benioff is credited with first applying quantum theory to computers in 1981. Benioff theorized about creating a quantum Turing machine. Most digital computers, like the one you are using to read this article, are based on the Turing Theory. Learn what this is in the next section.




Defining the Quantum Computer
The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of tape of unlimited length that is divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write device reads these symbols and blanks, which gives the machine its instructions to perform a certain program. Does this sound familiar? Well, in a quantum Turing machine, the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be either 0 or 1 or a superposition of 0 and 1; in other words the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.
Today's computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers.
This superposition of qubits is what gives quantum computers their inherent parallelism. According to physicist David Deutsch, this parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer that could run at 10 teraflops (trillions of floating-point operations per second). Today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them, and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system's integrity. Entanglement provides a potential answer. In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, and the second atom can take on the properties of the first atom. So if left alone, an atom will spin in all directions. The instant it is disturbed it chooses one spin, or one value; and at the same time, the second entangled atom will choose an opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.
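Superposition and entanglement can both be demonstrated with a few lines of linear algebra. The sketch below (using NumPy; the gates and state vectors are standard quantum-computing conventions, not anything specific from the article) puts one qubit into superposition with a Hadamard gate and then entangles it with a second qubit using a CNOT gate, producing a Bell state:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # flips the second qubit
                 [0, 1, 0, 0],                 # when the first is |1>
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4); state[0] = 1.0            # start in |00>
state = CNOT @ (np.kron(H, I2) @ state)        # -> (|00> + |11>) / sqrt(2)

for basis, amp in zip(("00", "01", "10", "11"), state):
    print(f"P(|{basis}>) = {abs(amp)**2:.2f}")
# Only |00> and |11> have nonzero probability: measuring one qubit
# instantly fixes the other, the correlation the text calls entanglement.
```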
Next, we'll look at some recent advancements in the field of quantum computing.
QUBIT CONTROL
Computer scientists control the microscopic particles that act as qubits in quantum computers by using control devices.
Ion traps use optical or magnetic fields (or a combination of both) to trap ions.
Optical traps use light waves to trap and control particles.
Quantum dots are made of semiconductor material and are used to contain and manipulate electrons.
Semiconductor impurities contain electrons by using "unwanted" atoms found in semiconductor material.
Superconducting circuits allow electrons to flow with almost no resistance at very low temperatures.


How Acoustic Levitation Works







Unless you travel into the vacuum of space, sound is all around you every day. But most of the time, you probably don't think of it as a physical presence. You hear sounds; you don't touch them. The only exceptions may be loud nightclubs, cars with window-rattling speakers and ultrasound machines that pulverize kidney stones. But even then, you most likely don't think of what you feel as sound itself, but as the vibrations that sound creates in other objects.
The idea that something so intangible can lift objects can seem unbelievable, but it's a real phenomenon. Acoustic levitation takes advantage of the properties of sound to cause solids, liquids and heavy gases to float. The process can take place in normal or reduced gravity. In other words, sound can levitate objects on Earth or in gas-filled enclosures in space.
To understand how acoustic levitation works, you first need to know a little about gravity, air and sound. First, gravity is a force that causes objects to attract one another. The simplest way to understand gravity is through Isaac Newton's law of universal gravitation. This law states that every particle in the universe attracts every other particle. The more massive an object is, the more strongly it attracts other objects. The closer objects are, the more strongly they attract each other. An enormous object, like the Earth, easily attracts objects that are close to it, like apples hanging from trees. Scientists haven't decided exactly what causes this attraction, but they believe it exists everywhere in the universe.
Second, air is a fluid that behaves essentially the same way liquids do. Like liquids, air is made of microscopic particles that move in relation to one another. Air also moves like water does -- in fact, some aerodynamic tests take place underwater instead of in the air. The particles in gases, like the ones that make up air, are simply farther apart and move faster than the particles in liquids.
Third, sound is a vibration that travels through a medium, like a gas, a liquid or a solid object. A sound's source is an object that moves or changes shape very rapidly. For example, if you strike a bell, the bell vibrates in the air. As one side of the bell moves out, it pushes the air molecules next to it, increasing the pressure in that region of the air. This area of higher pressure is a compression. As the side of the bell moves back in, it pulls the molecules apart, creating a lower-pressure region called a rarefaction. The bell then repeats the process, creating a repeating series of compressions and rarefactions. Each repetition is one wavelength of the sound wave.
The sound wave travels as the moving molecules push and pull the molecules around them. Each molecule moves the one next to it in turn. Without this movement of molecules, the sound could not travel, which is why there is no sound in a vacuum.


Acoustic levitation uses sound traveling through a fluid -- usually a gas -- to balance the force of gravity. On Earth, this can cause objects and materials to hover unsupported in the air. In space, it can hold objects steady so they don't move or drift.
The process relies on the properties of sound waves, especially intense sound waves.
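One concrete way to see the scales involved: in a typical standing-wave levitator, small objects collect at the pressure nodes of the wave, which sit half a wavelength apart. The 20 kHz driver frequency below is an assumption for illustration; the speed of sound is the standard value for room-temperature air:

```python
# Wavelength and node spacing for an ultrasonic standing-wave levitator.
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 C
FREQUENCY = 20000.0      # 20 kHz ultrasonic driver (assumed)

wavelength = SPEED_OF_SOUND / FREQUENCY
print(f"wavelength:   {wavelength * 1000:.1f} mm")      # about 17.2 mm
print(f"node spacing: {wavelength / 2 * 1000:.1f} mm")  # about 8.6 mm
```

That millimetre-scale spacing is one reason acoustic levitation suits droplets and small samples rather than large objects.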


TELEPORTATION




SEND ANYTHING FROM ONE PLACE TO ANOTHER WITHOUT TRAVELLING ANY DISTANCE



Ever since the wheel was invented more than 5,000 years ago, people have been inventing new ways to travel faster from one point to another. The chariot, bicycle, automobile, airplane and rocket have all been invented to decrease the amount of time we spend getting to our desired destinations. Yet all of these forms of transportation share the same flaw: they require us to cross a physical distance, which can take anywhere from minutes to many hours depending on the starting and ending points.






But what if there were a way to get you from your home to the supermarket without having to use your car, or from your backyard to the International Space Station without having to board a spacecraft? There are scientists working right now on such a method of travel, combining properties of telecommunications and transportation to achieve a system called teleportation. In this article, you will learn about experiments that have actually achieved teleportation with photons, and how we might be able to use teleportation to travel anywhere, at anytime.
Teleportation involves dematerializing an object at one point, and sending the details of that object's precise atomic configuration to another location, where it will be reconstructed. What this means is that time and space could be eliminated from travel -- we could be transported to any location instantly, without actually crossing a physical distance.
Many of us were introduced to the idea of teleportation, and other futuristic technologies, by the short-lived Star Trek television series (1966-69) based on tales written by Gene Roddenberry. Viewers watched in amazement as Captain Kirk, Spock, Dr. McCoy and others beamed down to the planets they encountered on their journeys through the universe.
In 1993, the idea of teleportation moved out of the realm of science fiction and into the world of theoretical possibility. It was then that physicist Charles Bennett and a team of researchers at IBM confirmed that quantum teleportation was possible, but only if the original object being teleported was destroyed. This revelation, first announced by Bennett at an annual meeting of the American Physical Society in March 1993, was followed by a report on his findings in the March 29, 1993 issue of Physical Review Letters. Since that time, experiments using photons have proven that quantum teleportation is in fact possible.







Teleportation: Recent Experiments

In 1998, physicists at the California Institute of Technology (Caltech), along with two European groups, turned the IBM ideas into reality by successfully teleporting a photon, a particle of energy that carries light. The Caltech group was able to read the atomic structure of a photon, send this information across 3.28 feet (about 1 meter) of coaxial cable and create a replica of the photon. As predicted, the original photon no longer existed once the replica was made.
In performing the experiment, the Caltech group was able to get around the Heisenberg Uncertainty Principle, the main barrier for teleportation of objects larger than a photon. This principle states that you cannot simultaneously know the location and the speed of a particle. But if you can't know the position of a particle, then how can you teleport it? In order to teleport a photon without violating the Heisenberg Principle, the Caltech physicists used a phenomenon known as entanglement. In entanglement, at least three photons are needed to achieve quantum teleportation:
Photon A: The photon to be teleported
Photon B: The transporting photon
Photon C: The photon that is entangled with photon B
If researchers tried to look too closely at photon A without entanglement, they would bump it, and thereby change it. By entangling photons B and C, researchers can extract some information about photon A, and the remaining information would be passed on to B by way of entanglement, and then on to photon C. When researchers apply the information from photon A to photon C, they can create an exact replica of photon A. However, photon A no longer exists as it did before the information was sent to photon C.
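The bookkeeping in that paragraph can be followed exactly with a small state-vector simulation. This is a generic sketch of the standard quantum-teleportation protocol (not the Caltech apparatus itself): whatever Bell outcome the measurement on A and B yields, applying the matching Pauli correction leaves C in A's original state:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli corrections applied
Z = np.array([[1, 0], [0, -1]], dtype=complex)   # to C after the measurement
I = np.eye(2, dtype=complex)

psi = np.array([0.6, 0.8], dtype=complex)        # unknown state on photon A
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # B,C entangled

state = np.kron(psi, bell).reshape(2, 2, 2)      # amplitudes indexed (A, B, C)

s = 1 / np.sqrt(2)
outcomes = {                                # Bell outcome on A,B -> fix for C
    "Phi+": (np.array([s, 0, 0, s]), I),
    "Psi+": (np.array([0, s, s, 0]), X),
    "Phi-": (np.array([s, 0, 0, -s]), Z),
    "Psi-": (np.array([0, s, -s, 0]), Z @ X),
}

for name, (proj, fix) in outcomes.items():
    # Project A,B onto this Bell outcome; the residue lives on photon C.
    c = np.tensordot(proj.conj().reshape(2, 2), state, axes=([0, 1], [0, 1]))
    c = fix @ (c / np.linalg.norm(c))
    print(name, np.round(c.real, 3))             # always [0.6, 0.8]
```

Note that A's state survives only on C: the projection destroys the original, mirroring the "original photon no longer existed" observation from the Caltech experiment.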
In other words, when Captain Kirk beams down to an alien planet, an analysis of his atomic structure is passed through the transporter room to his desired location, where a replica of Kirk is created and the original is destroyed.
In 2002, researchers at the Australian National University successfully teleported a laser beam.
The most recent successful teleportation experiment took place on October 4, 2006 at the Niels Bohr Institute in Copenhagen, Denmark. Dr. Eugene Polzik and his team teleported information stored in a laser beam into a cloud of atoms. According to Polzik, "It is one step further because for the first time it involves teleportation between light and matter, two different objects. One is the carrier of information and the other one is the storage medium" [CBC]. The information was teleported about 1.6 feet (half a meter).
Quantum teleportation holds promise for quantum computing. These experiments are important in developing networks that can distribute quantum information. Professor Samuel Braunstein, of the University of Wales, Bangor, called such a network a "quantum Internet." This technology may be used one day to build a quantum computer that has data transmission rates many times faster than today's most powerful computers.





We are years away from the development of a teleportation machine like the transporter room on Star Trek's Enterprise spaceship. The laws of physics may even make it impossible to create a transporter that enables a person to be sent instantaneously to another location, which would require travel at the speed of light.
For a person to be transported, a machine would have to be built that can pinpoint and analyze all of the 10^28 atoms that make up the human body. That's more than a trillion trillion atoms. This machine would then have to send this information to another location, where the person's body would be reconstructed with exact precision. Molecules couldn't be even a millimeter out of place, lest the person arrive with some severe neurological or physiological defect.

In the Star Trek episodes, and the spin-off series that followed it, teleportation was performed by a machine called a transporter. This was basically a platform that the characters stood on, while Scotty adjusted switches on the transporter room control boards. The transporter machine then locked onto each atom of each person on the platform, and used a transporter carrier wave to transmit those molecules to wherever the crew wanted to go. Viewers watching at home witnessed Captain Kirk and his crew dissolving into a shiny glitter before disappearing, rematerializing instantly on some distant planet.
If such a machine were possible, it's unlikely that the person being transported would actually be "transported." It would work more like a fax machine -- a duplicate of the person would be made at the receiving end, but with much greater precision than a fax machine. But what would happen to the original? One theory suggests that teleportation would combine genetic cloning with digitization.
In this biodigital cloning, tele-travelers would have to die, in a sense. Their original mind and body would no longer exist. Instead, their atomic structure would be recreated in another location, and digitization would recreate the travelers' memories, emotions, hopes and dreams. So the travelers would still exist, but they would do so in a new body, of the same atomic structure as the original body, programmed with the same information.
But like all technologies, teleportation is sure to be improved upon, perhaps to the point that we can one day avoid such harsh methods. One day, one of your descendants could finish up a work day at a space office above some faraway planet in a galaxy many light-years from Earth, tell his or her wristwatch that it's time to beam home for dinner on planet X below, and sit down at the dinner table as soon as the words leave his or her mouth.