Thursday, May 4, 2017

Latest Technology Inventions


The latest technology invention in pollution control is a tower that cleans outdoor air.
The Tower is a seven-metre (23-foot) tall structure that removes ultra-fine particles from the air using patented ion technology developed by scientists at Delft University of Technology.
According to the World Health Organization, air pollution poses the greatest environmental threat to our health.
Air pollution causes respiratory and cardiovascular disease and accounts for over 7 million premature deaths every year - and that death toll is rising at an alarming rate.




In California, which suffers the worst health impacts of dirty air in the United States, air pollution causes the premature deaths of 53,000 residents every year.
In London, England, dirty air accounts for one out of every twelve deaths.
In Delhi, India, the average life expectancy is shortened by 6.3 years due to air pollution.

China has the worst air in the world. Beijing recently recorded pollution levels that were 17 times greater than the acceptable levels recommended by the World Health Organization.
Air pollution causes 1.6 million deaths every year in China - approximately 17% of all deaths.
For most countries, the deadliest form of air pollution is a fine particle known as "PM 2.5" (particulate matter 2.5). The name refers to fine particles measuring 2.5 micrometers or less in diameter. Unlike larger airborne particles that settle to the ground, PM 2.5 particles can float in the air for weeks.
When you breathe these particles into your lungs, they penetrate the lung tissue and are absorbed unfiltered into your bloodstream, causing damage throughout your body.

The problem with current air pollution control systems is that they reduce but do not eliminate pollution.
Dutch innovator Daan Roosegaarde, in collaboration with ENS Technology and the Delft University of Technology, developed large, scalable towers that remove pollutants from the air.
The ion technology was originally developed to clear the air of dust particles carrying MRSA bacteria (a type of bacteria resistant to antibiotics). The bacteria spread from human to human by traveling through the air on dust, and the air ionizer prevented them from spreading in this way.
Roosegaarde's Tower cleans 30,000 cubic meters of air per hour without producing ozone, and it draws about 1,400 watts of electricity - less than a desktop air purifier.
Air from the area surrounding the Tower is drawn into the structure. All airborne particles receive an electric charge.
The charged particles are captured and accumulate on large collector plates that have an opposite electric charge.
The clean air is then blown from the Tower back into the environment.
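Taken together, the figures above imply a very small energy cost per unit of air cleaned: roughly 1,400 watts spread across 30,000 cubic meters every hour. The back-of-envelope arithmetic below is just that calculation using the numbers quoted in this article, not a manufacturer specification.

```python
# Back-of-envelope arithmetic using only the figures quoted above.
airflow_m3_per_hour = 30_000   # cubic meters of air cleaned per hour
power_watts = 1_400            # electrical power drawn by the Tower

energy_wh_per_m3 = power_watts / airflow_m3_per_hour
print(f"about {energy_wh_per_m3:.3f} watt-hours per cubic meter")   # ~0.047 Wh
```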

"Basically, it's like when you have a plastic balloon, and you polish it with your hand, it becomes static, electrically charged, and it attracts your hair," explains Roosegaarde.
The invention won the German Design Award for Excellent Product Design awarded by the German Ministry for Economics and Technology.
The Tower is currently being tested in Beijing by the Chinese Ministry of Environmental Protection.
“We're working now on the calculation: how many towers do we actually need to place in a city like Beijing. It shouldn't be thousands of towers, it should be hundreds. We can make larger versions as well, the size of buildings,” says Roosegaarde.

Cloud Computing

Analysts predict that the latest technology inventions in cloud computing will significantly influence how we use our computers and mobile devices.
Cloud computing means that tasks your computer would normally perform, and files it would normally store, are handled on remote servers instead.
Using an internet connection, you connect to a service that has the architecture, infrastructure and software to manage almost any task or storage requirement at lower cost.
The advantage of cloud computing is that it eliminates the difficulty and expense of maintaining, upgrading and scaling your own computer hardware and software while increasing efficiency, speed and resources.
The demands on your own computer's processing speed, memory capacity, software and maintenance are minimized.
You could store and access files of any size or type, play games, use or develop applications, render videos, process documents, run scientific calculations - anything you want - simply by using a smartphone.
As a comparison, imagine you had to generate your own electricity. You would need to maintain, upgrade and scale the generating equipment to keep up with your demand, which would be expensive and time consuming.
Cloud computing works more like a utility: the provider has the architecture, infrastructure, applications, expertise and resources to deliver the service, and you simply connect to its grid.
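As one concrete illustration of the storage side of this idea, the short sketch below uploads and retrieves a file using Amazon S3 through the boto3 Python SDK. The bucket name and file paths are hypothetical, and the snippet assumes AWS credentials are already configured; it is one example of handing storage to a provider, not the only way to use the cloud.

```python
import boto3  # AWS SDK for Python; one of many possible cloud providers and SDKs

# Hypothetical names: "my-example-bucket" and the file paths are placeholders,
# and the calls assume AWS credentials are already configured on this machine.
s3 = boto3.client("s3")

# Hand the file to the provider's infrastructure, which takes care of
# durability, scaling and maintenance.
s3.upload_file("report.pdf", "my-example-bucket", "backups/report.pdf")

# Retrieve it later from any device with an internet connection.
s3.download_file("my-example-bucket", "backups/report.pdf", "report-copy.pdf")
```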

Detecting walking speed with wireless signals


We’ve long known that blood pressure, breathing, body temperature and pulse provide an important window into the complexities of human health. But a growing body of research suggests that another vital sign – how fast you walk – could be a better predictor of health issues like cognitive decline, falls, and even certain cardiac or pulmonary diseases.
Unfortunately, it’s hard to accurately monitor walking speed in a way that’s both continuous and unobtrusive. Professor Dina Katabi’s group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has been working on the problem, and believes that the answer is to go wireless.
In a new paper, the team presents “WiGait,” a device that can measure the walking speed of multiple people with 95 to 99 percent accuracy using wireless signals.
The size of a small painting, the device can be placed on the wall of a person’s house, and its signals emit roughly one-hundredth the amount of radiation of a standard cellphone. It builds on Katabi’s previous work on WiTrack, which analyzes wireless signals reflected off people’s bodies to measure a range of behaviors, from breathing and falling to specific emotions.
“By using in-home sensors, we can see trends in how walking speed changes over longer periods of time,” says lead author and PhD student Chen-Yu Hsu. “This can provide insight into whether someone should adjust their health regimen, whether that’s doing physical therapy or altering their medications.”
WiGait is also 85 to 99 percent accurate at measuring a person’s stride length, which could allow researchers to better understand conditions like Parkinson’s disease that are characterized by reduced step size.
Hsu and Katabi developed WiGait with CSAIL PhD student Zachary Kabelac and master’s student Rumen Hristov, alongside undergraduate Yuchen Liu from the Hong Kong University of Science and Technology, and Assistant Professor Christine Liu from the Boston University School of Medicine. The team will present their paper in May at ACM’s CHI Conference on Human Factors in Computing Systems in Colorado.  
How it works
Today, walking speed is measured by physical therapists or clinicians using a stopwatch. Wearables like Fitbit can only roughly estimate speed based on step count, and GPS-enabled smartphones are similarly inaccurate and can’t work indoors. Cameras are intrusive and can only monitor one room. VICON motion tracking is the only method that’s comparably accurate to WiGait, but it is not widely available enough to be practical for monitoring day-to-day health changes.
Meanwhile, WiGait measures walking speed with a high level of granularity, without requiring that the person wear or carry a sensor. It does so by analyzing the surrounding wireless signals and their reflections off a person’s body. The CSAIL team’s algorithms can also distinguish walking from other movements, such as cleaning the kitchen or brushing one's teeth.
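The paper's actual signal-processing pipeline is not described here, but the basic idea of turning reflections into a speed estimate can be sketched in a few lines. Suppose the device has already converted reflected radio signals into a series of distance estimates to the walking person (the sample data below is invented for illustration); the walking speed is then simply the slope of distance over time.

```python
import numpy as np

# Invented sample data: assume the device has already converted reflected
# radio signals into distance estimates (in meters) to the walking person,
# sampled every 0.1 seconds. This is not the paper's actual pipeline.
timestamps = np.arange(0.0, 3.0, 0.1)             # seconds
distances = 1.0 + 1.3 * timestamps                 # person walking at 1.3 m/s
distances += np.random.default_rng(1).normal(0.0, 0.05, timestamps.size)  # measurement noise

# Walking speed is the slope of distance over time; a linear fit recovers it.
speed, _ = np.polyfit(timestamps, distances, 1)
print(f"estimated walking speed: {speed:.2f} m/s")  # close to 1.3 m/s
```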
Katabi says the device could help reveal a wealth of important health information, particularly for the elderly. A change in walking speed, for example, could mean that the person has suffered an injury or is at an increased risk of falling. The system's feedback could even help the person determine if they should move to a different environment such as an assisted-living home.
“Many avoidable hospitalizations are related to issues like falls, congestive heart disease, or chronic obstructive pulmonary disease, which have all been shown to be correlated to gait speed,” Katabi says. “Reducing the number of hospitalizations, even by a small amount, could vastly improve health care costs.”
The team developed WiGait to be more privacy-minded than cameras, showing you as nothing more than a moving dot on a screen. In the future they hope to train it on people with walking impairments from Parkinson’s, Alzheimer’s or multiple sclerosis, to help physicians accurately track disease progression and adjust medications.
“The true novelty of this device is that it can map major metrics of health and behavior without any active engagement from the user, which is especially helpful for the cognitively impaired,” says Ipsit Vahia, a geriatric clinician at McLean Hospital and Harvard Medical School, who was not involved in the research. “Gait speed is a proxy indicator of many clinically important conditions, and down the line this could extend to measuring sleep patterns, respiratory rates, and other vital human behaviors.”

Ballyhooed artificial-intelligence technique known as “deep learning” revives 70-year-old idea.


In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”
Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1943 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.
Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.
The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.
“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”
Weighty matters
Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.
Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.
To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.
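As a minimal sketch of that description (not tied to any particular library, and with arbitrary illustrative numbers), a single node can be written in a few lines of Python:

```python
import numpy as np

# Illustrative numbers only: a node with three incoming connections.
inputs = np.array([0.7, 0.1, 0.4])     # data arriving over each connection
weights = np.array([0.9, -0.3, 0.5])   # the weight assigned to each connection
threshold = 0.5

# Multiply each input by its weight and add the products together.
weighted_sum = float(np.dot(inputs, weights))   # 0.8 in this example

if weighted_sum > threshold:
    output = weighted_sum   # the node "fires" and sends the sum onward
else:
    output = 0.0            # below threshold, it passes no data to the next layer

print(weighted_sum, output)
```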
When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.
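The toy loop below illustrates that training idea for a single node learning the AND function: the weights and threshold start out random and are nudged whenever the output disagrees with the label. Real multi-layer networks adjust their weights with gradient descent and backpropagation rather than this simple perceptron-style rule, and the data here is invented for the example.

```python
import numpy as np

# Toy data invented for the example: the node should learn to fire only
# when both inputs are 1 (the AND function).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 0, 0, 1])

rng = np.random.default_rng(0)
weights = 0.1 * rng.normal(size=2)   # weights start out random
bias = 0.1 * rng.normal()            # bias plays the role of (minus) the threshold
learning_rate = 0.1

for epoch in range(100):
    for xi, target in zip(X, y):
        prediction = 1 if xi @ weights + bias > 0 else 0
        error = target - prediction
        weights += learning_rate * error * xi   # nudge weights toward the label
        bias += learning_rate * error           # nudge the threshold as well

print([1 if xi @ weights + bias > 0 else 0 for xi in X])   # [0, 0, 0, 1]
```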
Minds and machines
The neural nets described by McCulloch and Pitts in 1943 had thresholds and weights, but they weren’t arranged into layers, and the researchers didn’t specify any training mechanism. What McCulloch and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: The point was to suggest that the human brain could be thought of as a computing device.
Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.
The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron’s design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.
Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book titled “Perceptrons,” which demonstrated that executing certain fairly common computations on Perceptrons would be impractically time consuming.
“Of course, all of these limitations kind of disappear if you take machinery that is a little more complicated — like, two layers,” Poggio says. But at the time, the book had a chilling effect on neural-net research.
“You have to put these things in historical context,” Poggio says. “They were arguing for programming — for languages like Lisp. Not many years before, people were still using analog computers. It was not clear at all at the time that programming was the way to go. I think they went a little bit overboard, but as usual, it’s not black and white. If you think of this as this competition between analog computing and digital computing, they fought for what at the time was the right thing.”
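The textbook illustration of the kind of limitation Minsky and Papert highlighted is the XOR (exclusive-or) function, which no single layer of threshold units can compute but which a two-layer network handles easily. The sketch below, with hand-set weights chosen purely for the example, shows the two-layer fix Poggio alludes to:

```python
import numpy as np

def step(x):
    # Threshold activation: a unit "fires" (outputs 1) only if its weighted sum is positive.
    return (x > 0).astype(float)

# All four input pairs for XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Hand-set weights: hidden unit 1 computes OR, hidden unit 2 computes NAND,
# and the output unit ANDs them together, which is exactly XOR.
W_hidden = np.array([[1.0, -1.0],
                     [1.0, -1.0]])
b_hidden = np.array([-0.5, 1.5])

W_out = np.array([1.0, 1.0])
b_out = -1.5

hidden = step(X @ W_hidden + b_hidden)
output = step(hidden @ W_out + b_out)
print(output)   # [0. 1. 1. 0.] -- the XOR of each input pair
```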
Periodicity
By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.
But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won’t answer that question.
In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.
The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.
Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.
Under the hood
The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.
The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.
There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.
