Showing posts with label Robot. Show all posts

Tuesday, July 29, 2008

University of Southern California : Robot playmates may help children with autism

USC studies document that kids with ASD actively interact with robots; creation of therapy tools is next step



Feil-Seifer, Matarić and assistants: "a doorway into the attention" of ASD children.


Papers delivered at three conferences in the US and Europe this summer report on new research at the University of Southern California Viterbi School of Engineering studying interactions of children with Autism Spectrum Disorders (ASD) with bubble-blowing robots.

The preliminary studies, by Professor Maja Matarić and PhD student David Feil-Seifer of the USC Interaction Laboratory, confirm what has been widely reported anecdotally: that ASD children in many cases interact more easily with mechanical devices than with humans.

Matarić and Feil-Seifer, both specialists in Socially Assistive Robotics (SAR), are now engaged in further research to confirm their findings and to develop a robot "control architecture" that will tailor robot interactions to the specific needs of ASD children, supporting the therapists treating their condition.

The initial study, reported at the June Conference on Interaction Design for Children with Special Needs in Chicago, tested whether ASD children were actively interacting with, rather than merely passively observing, a colorful bubble-blowing wheeled robot.



Bubblebot: When set in "contingent" behavior mode, a child's actions can control its behavior. A video can be viewed at http://www.youtube.com/watch?v=VRNWRlSmiA0


The robot had two settings. In one, it carried on its rolling and bubble blowing on its own internal schedule, regardless of the behavior of the child. In the other, "when the child pushes a button, then the bubbles blow," in the words of the Chicago presentation.
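In control-loop terms, the difference between the two settings is a single branch. A minimal sketch of one decision step, where the random-mode bubble probability is an assumed illustration rather than a value from the study:

```python
import random

def robot_step(contingent: bool, button_pressed: bool) -> bool:
    """One decision step for the bubble-blowing robot.

    Returns True when the robot should blow bubbles this step.
    In contingent mode the child is in control: bubbles follow the
    button press. In random mode the robot follows its own internal
    schedule and ignores the child entirely. The 20% chance below is
    an illustrative assumption, not a parameter from the study.
    """
    if contingent:
        return button_pressed
    return random.random() < 0.2
```

The study's comparison amounts to running the same sessions with `contingent` set to True versus False and measuring the child's social behavior in each.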

Researchers observed the children and recorded differences. "We found that the behavior of the robot affects the social behavior of a child (both human-human interaction and human-robot interaction): social behavior with a contingent robot was greater than with a random robot.

"Total speech went from 39.4 to 48.4 utterances, robot speech from 6.2 to 6.6 utterances, and parent speech from 17.8 to 33 utterances. Total robot interactions went from 43.42 to 55.31, with button pushes increasing from 14.69 to 21.87 and other robot interactions going from 24.11 to 28. Total directed interactions (interactions that were clearly directed at either the robot or the parent) went up from 62.75 to 89.47. Generally, when the robot was acting contingently, the child was more sociable."
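Expressed as relative changes, these figures can be recomputed directly from the numbers quoted above. A small sketch (the measure labels are paraphrased from the text):

```python
def pct_increase(before: float, after: float) -> float:
    """Percentage increase from the random-robot to the contingent-robot condition."""
    return round((after - before) / before * 100, 1)

# Figures reported in the Chicago presentation (random -> contingent):
measures = {
    "total speech (utterances)": (39.4, 48.4),     # +22.8%
    "parent speech (utterances)": (17.8, 33.0),    # +85.4%
    "button pushes": (14.69, 21.87),               # +48.9%
    "total directed interactions": (62.75, 89.47), # +42.6%
}
for name, (before, after) in measures.items():
    print(f"{name}: +{pct_increase(before, after)}%")
```

The largest relative jumps, by this reckoning, are in parent speech and button pushes, consistent with the authors' conclusion that the contingent robot drew out more sociable behavior.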

While only four children took part in the initial study, Feil-Seifer and Matarić believe the work clearly demonstrates the ability of robots to actively engage ASD children -- to "offer a doorway into their attention," as Matarić says. A much more extensive follow-up with more subjects is already in progress, in collaboration with Childrens Hospital Los Angeles and the Autism Genetic Resource Exchange.

Two other presentations by Feil-Seifer and Matarić, at the 11th International Symposium on Experimental Robotics 2008 in Athens, Greece, in July 2008, and at the IEEE International Workshop on Robot and Human Interactive Communication, discuss these results in more detail, particularly in regard to the "Behavior-Based Behavior Intervention Architecture" (B3IA) they have developed to make the robots flexible and useful tools to help ASD children.

This architecture (the system, including robotic and non-robotic components, plus provisions for recording and analyzing the proceedings) is based on an ASD therapy format called DIR/Floortime, in which a therapist shares the floor with various toys used to try to engage the child.

Matarić and Feil-Seifer, in collaboration with Dr. Marian Williams of Childrens Hospital Los Angeles and Shri Narayanan of the Viterbi School's Department of Electrical Engineering, are replacing toys with robots: both the rolling robots with horns and bubble blowers used in the initial studies, and humanoid robots capable of smiles and other expressions.

Behind the scenes, the architecture also includes an overhead video system that analyzes, documents, and stores every interaction, plus a control system that lets the therapist operator switch between interaction scenarios -- concentrating on what works and refining what works to make it work better -- while still retaining the standard record-keeping and monitoring used in ASD therapy.
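The therapist-facing part of this design can be pictured as a small scenario switcher that logs every transition for the session record. A hypothetical sketch (the class and scenario names are illustrations, not part of B3IA):

```python
class TherapistConsole:
    """Hypothetical sketch of the operator control described for B3IA:
    the therapist switches between interaction scenarios, and every
    transition is logged, standing in for the architecture's
    record-keeping and overhead-video documentation."""

    def __init__(self, scenarios: list[str]):
        self.scenarios = scenarios
        self.active = scenarios[0]  # session starts in the first scenario
        self.log: list[tuple[str, str]] = []  # (from_scenario, to_scenario)

    def switch(self, name: str) -> None:
        """Switch to another scenario, recording the transition."""
        if name not in self.scenarios:
            raise ValueError(f"unknown scenario: {name}")
        self.log.append((self.active, name))
        self.active = name
```

For example, a therapist might start with a bubble-blowing scenario and, seeing it hold the child's attention, switch to a variant that asks more of the child; the log preserves the whole sequence for later analysis.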

Matarić has for years been working in the field of socially assistive robotics to help a variety of other user populations, including patients with Alzheimer's disease and stroke patients undergoing rehabilitation. She notes that ASD has reached "epidemic" proportions in the United States.

"I am gratified by these preliminary results," she said. "I believe that Socially Assistive Robotics has a part to play in helping families, both the affected children and their parents and siblings."

###

While not authors of the studies, Dr. Clara Lajonchere of the Autism Genetic Resource Exchange and Dr. Michele Kipke of Childrens Hospital Los Angeles played key roles in the work and will continue to collaborate with the USC roboticists.

The research was funded by the USC Provost's Center for Interdisciplinary Research, the Okawa Foundation, and an NSF Computing Research Infrastructure Grant.

Copies of the conference presentations are available in PDF form here:

David J. Feil-Seifer and Maja J. Matarić, "Robot-assisted therapy for children with Autism Spectrum Disorders," Refereed Workshop Conference on Interaction Design for Children: Children with Special Needs, pp. 49-52, Chicago, IL, Jun 2008.
http://cres.usc.edu/pubdb_html/files_upload/588.pdf

David J. Feil-Seifer and Maja J. Matarić, "Toward Socially Assistive Robotics For Augmenting Interventions For Children With Autism Spectrum Disorders," 11th International Symposium on Experimental Robotics 2008, Athens, Greece, Jul 2008.
http://cres.usc.edu/pubdb_html/files_upload/589.pdf

David J. Feil-Seifer and Maja J. Matarić, "B3IA: An architecture for autonomous robot-assisted behavior intervention for children with Autism Spectrum Disorders," IEEE Proceedings of the International Workshop on Robot and Human Interactive Communication, Munich, Germany, Aug 2008.
http://cres.usc.edu/pubdb_html/files_upload/549.pdf

NewYork-Presbyterian Hospital/Columbia University Medical Center research into robotic surgery for kidney cancer

New research helps optimize benefits of robotic approach

NEW YORK (July 28, 2008) -- Clinical research at NewYork-Presbyterian Hospital/Columbia University Medical Center is helping bring the advantages of robotic surgery, including reduced pain and quicker recovery, to kidney cancer patients.

Using the latest-generation da Vinci® S Surgical System from Intuitive Surgical, surgeons operate through several small incisions in the abdomen, removing only the cancerous tissue from the kidney and repairing the remaining normal kidney tissue, all with robotic arms guided by video from a camera on a separate robotic arm.

The stereoscopic view provides enhanced visibility, and the nimble robotic mechanism makes for easy cutting and suturing, according to Drs. Ketan Badani and Jaime Landman, who make up the robotic kidney surgery team at NewYork-Presbyterian/Columbia.

"With robotics, there is a much greater opportunity for complex reconstruction of the kidney than can typically be achieved with a standard laparoscopic approach," notes Dr. Badani, director of robotic urologic surgery at NewYork-Presbyterian Hospital/Columbia University Medical Center and assistant professor of urology at Columbia University College of Physicians and Surgeons.

"This means that, hopefully, we will have an opportunity not only to reduce the need for kidney cancer patients to require a kidney transplant, but also reduce their need for dialysis later in life," adds Dr. Landman, director of minimally invasive urologic surgery at NewYork-Presbyterian Hospital/Columbia University Medical Center and associate professor of urology at Columbia University College of Physicians and Surgeons.

In a recent issue of the Journal of Endourology, Dr. Badani described a new technique for port placement -- the location of the small incision through which the robot operates -- that maximizes range of motion for the robot's camera arm and working arm. The approach was shown to be successful in more than 50 cases, and has been adopted for use by medical centers worldwide.

Robotic surgery, most widely used for prostate cancer surgery, is beginning to be more widely available for other conditions. In addition to kidney cancer, Dr. Badani and Dr. Mitchell Benson (George F. Cahill Professor and Chairman of the Department of Urology at Columbia University College of Physicians and Surgeons and urologist-in-chief at NewYork-Presbyterian Hospital/Columbia University Medical Center), have established robotic surgery for bladder cancer, and they cite work being undertaken in pelvic floor reconstruction and repair of vaginal wall prolapse.

###

Columbia University Medical Center

Columbia University Medical Center provides international leadership in basic, pre-clinical and clinical research, in medical and health sciences education, and in patient care. The medical center trains future leaders and includes the dedicated work of many physicians, scientists, public health professionals, dentists, and nurses at the College of Physicians & Surgeons, the Mailman School of Public Health, the College of Dental Medicine, the School of Nursing, the biomedical departments of the Graduate School of Arts and Sciences, and allied research centers and institutions. Established in 1767, Columbia's College of Physicians & Surgeons was the first institution in the country to grant the M.D. degree and is now among the most selective medical schools in the country. Columbia University Medical Center is home to the largest medical research enterprise in New York City and state and one of the largest in the United States. For more information, please visit www.cumc.columbia.edu.

NewYork-Presbyterian Hospital

NewYork-Presbyterian Hospital, based in New York City, is the nation's largest not-for-profit, non-sectarian hospital, with 2,242 beds. The Hospital has nearly 2 million patient visits in a year, including more than 230,000 visits to its emergency departments -- more than any other area hospital. NewYork-Presbyterian provides state-of-the-art inpatient, ambulatory and preventive care in all areas of medicine at five major centers: NewYork-Presbyterian Hospital/Weill Cornell Medical Center, NewYork-Presbyterian Hospital/Columbia University Medical Center, Morgan Stanley Children's Hospital of NewYork-Presbyterian, NewYork-Presbyterian Hospital/Allen Pavilion and NewYork-Presbyterian Hospital/Westchester Division. One of the largest and most comprehensive health-care institutions in the world, the Hospital is committed to excellence in patient care, research, education and community service. It ranks sixth in U.S.News & World Report's guide to "America's Best Hospitals," ranks first on New York magazine's "Best Hospitals" survey, has the greatest number of physicians listed in New York magazine's "Best Doctors" issue, and is included among Solucient's top 15 major teaching hospitals. The Hospital's mortality rates are among the lowest for heart attack and heart failure in the country, according to a 2007 U.S. Department of Health and Human Services (HHS) report card. The Hospital has academic affiliations with two of the nation's leading medical colleges: Weill Cornell Medical College and Columbia University College of Physicians and Surgeons. For more information, visit www.nyp.org.

Tuesday, July 15, 2008

Do we think that machines can think?

When our PC goes on strike yet again, we tend to curse at it as if it were human. Why, and under what circumstances, we attribute human-like properties to machines, and how such processes manifest at the cortical level, was investigated in a project led by Dr. Sören Krach and Prof. Tilo Kircher of RWTH Aachen University (Clinic for Psychiatry and Psychotherapy), in cooperation with the Department of "Social Robotics" (Bielefeld University) and Neuroimage Nord (Hamburg). The findings were published July 9 in the online, open-access journal PLoS ONE.

Almost daily, new accomplishments in the field of humanoid robotics are presented in the media. Increasingly elaborate and versatile humanoid robots are being built, and human-robot interactions are accumulating in daily life. However, how humans perceive these "machines" and attribute capabilities and "mental qualities" to them remains largely unexplored.

In the fMRI study reported in PLoS ONE, Krach and colleagues investigated how the human-likeness of an interaction partner modulates participants' brain activity. Participants played a simple computer game (the prisoner's dilemma) against four different partners: an ordinary notebook computer, a functionally designed Lego robot, the anthropomorphic robot BARTHOC Jr., and a human. All four partners played an identical sequence of moves, though this was not revealed to the participants.
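The prisoner's dilemma has a standard payoff structure (temptation > reward > punishment > sucker's payoff). The press release does not give the payoffs actually used in the study, so the values below are illustrative:

```python
# Illustrative prisoner's dilemma payoffs; C = cooperate, D = defect.
# (my_move, opponent_move) -> (my_payoff, opponent_payoff)
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both get the reward
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),  # temptation to defect vs. sucker's payoff
    ("D", "D"): (1, 1),  # mutual defection: both get the punishment
}

def play_round(my_move: str, opponent_move: str) -> tuple[int, int]:
    """Return (my_payoff, opponent_payoff) for one round."""
    return PAYOFFS[(my_move, opponent_move)]
```

Because every partner (notebook, Lego robot, BARTHOC Jr., human) replayed the same move sequence, any differences in brain activity reflect the partner's appearance, not the game itself.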

The results clearly demonstrated that neural activity in the medial prefrontal cortex as well as in the right temporo-parietal junction linearly increased with the degree of "human-likeness" of interaction partners, i.e. the more the respective game partners exhibited human-like features, the more the participants engaged cortical regions associated with mental state attribution/mentalizing.

Further, in a debriefing questionnaire, participants reported enjoying the interactions more the more human-like their partner was, and accordingly rated more human-like opponents as more intelligent.

This study is the first to investigate the neuronal basis of direct human-robot interaction at a higher cognitive level such as mentalizing. The researchers therefore expect the results to inform long-standing psychological and philosophical debates on human-machine interaction, and especially the question of what causes humans to be perceived as human.

New Public Library of Science study finds why musicians make us weep and computers don't

Music can soothe the savage breast much better if played by musicians rather than clever computers, according to a new University of Sussex-led study published in the online, open-access journal PLoS ONE.

Neuroscientists looked at the brain's response to piano sonatas played either by a computer or a musician and found that, while the computerised music elicited an emotional response – particularly to unexpected chord changes – it was not as strong as listening to the same piece played by a professional pianist.

Senior research fellow in psychology Dr Stefan Koelsch, who carried out the study with colleagues at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, played excerpts from classical piano sonatas to twenty non-musicians and recorded electric brain responses and skin conductance responses (which vary with sweat production as a result of an emotional response).

Although the participants did not play instruments and considered themselves unmusical, their brains showed clear electric activity in response to musical changes (unexpected chords and changes in tonal key), which indicated that the brain was understanding the "musical grammar". This response was enhanced, however, when the sonatas were played by musicians rather than a computer.

Dr Koelsch said: "It was interesting for us that the emotional reactions to the unexpected chords were stronger when played with musical expression. This shows us how musicians can enhance the emotional response to particular chords due to their performance, and it shows us how our brains react to the performance of other individuals."

The study also revealed that the brain was more likely to look for musical meaning when the music was played by a pianist.

"This is similar to the response we see when the brain is responding to language and working out what the words mean," says Dr Koelsch. "Our results suggest that musicians actually tell us something when they play. The brain responses show that when a pianist plays a piece with emotional expression, the piece is actually perceived as meaningful by listeners, even if they have not received any formal musical training."

Tuesday, July 1, 2008

University Hospital Heidelberg : Insights into Micrometers

New high-tech imaging center “TIGA” at the University of Heidelberg / Robot “NanoZoomer” shows high-resolution images of cells and tissue

“TIGA,” the new high-tech imaging center at the University of Heidelberg, founded in cooperation with the Japanese company Hamamatsu, provides deep insights: a high-tech robot makes it possible for the first time to automatically image and evaluate tissue slices only micrometers thick – an important aid for researchers in understanding cancer or in following in detail the effect of treatment on cells and tissue.

The Hamamatsu Tissue Imaging and Analysis (TIGA) Center is a cooperative effort between the Institutes of Pathology and of Medical Biometry and Informatics at the University of Heidelberg and the Japanese company Hamamatsu Photonics. In addition, it belongs to BIOQUANT, the research center for quantitative biology at the University of Heidelberg. At its core is the imaging robot “NanoZoomer” from Hamamatsu Photonics: the robot scans the tissue slices and displays them on the monitor for researchers at ultra-high resolution and in various planes.

“Technically, this has brought the fully automatic evaluation of tissue changes and approaches for new therapy within our grasp,” states Professor Dr. Peter Schirmacher, Director of the Institute for Pathology at Heidelberg University Hospital. This would represent a new milestone in pathology.

Detailed images help understand diseases

Which proteins are formed to a greater degree in cancer cells? How is tumor tissue changed during radiation treatment? Thanks to the NanoZoomer’s high-resolution images and special evaluation programs, researchers in the future will be able to evaluate tissue and cell samples more quickly and accurately and gain important new insights for therapy tailored to the individual patient, for example for breast cancer.

In the future, the robot will be able to determine changes in cells and tissue fully automatically. “The NanoZoomer represents a quantum leap in tissue research,” says Dr. Niels Grabe of the Institute for Medical Biometry and Informatics and research director at the TIGA Center.

Virtual Tissue is modeled from data

The medical IT specialists use the NanoZoomer to evaluate huge quantities of data from tissues for their research. For example, Dr. Niels Grabe and his team used data to model virtual skin tissue. “On a computer model of human skin tissue we can test whether certain substances are toxic, for example,” explains Dr. Grabe. “In the future, this could make it easier to develop potential new drugs.”

Hamamatsu recognized the many possible applications early on, opening up new technological markets for the company. “We are happy to have found two partners in the Heidelberg Institute of Pathology and the Institute of Medical Biometry and Informatics with whom we can develop concrete clinical uses and new applications for research,” said Hideo Hiruma, Managing Director of Hamamatsu Photonics, Japan.

Contact:

Dr. Niels Grabe

Research Director at the TIGA center

Tel.: +49 6221 / 56 5143

E-Mail: niels.grabe@med.uni-heidelberg.de

Professor Dr. Peter Schirmacher

Director of the Institute for Pathology

at Heidelberg University Hospital

Phone: +49 6221 / 56 2601

E-Mail: peter.schirmacher@med.uni-heidelberg.de


For the first time, a high-tech robot automatically images and evaluates tissue slices only micrometers thick (right: Dr. Niels Grabe, research director at the TIGA center).

Photo: Rothe

Hamamatsu Photonics, Germany and Japan:

Hamamatsu Germany is the German subsidiary of Hamamatsu Photonics K.K. (Japan), a leading manufacturer of devices for the generation and measurement of infrared, visible, and ultraviolet light. These devices include photodiodes, photomultiplier tubes, scientific light sources, infrared detectors, photoconductive cells, image sensors and integrated measurement systems for science and industry. The parent company is dedicated to the advancement of photonics through extensive research. This corporate philosophy results in state-of-the-art products which are used throughout the world in scientific, industrial, and commercial applications.

Institute of Pathology, University Heidelberg:

The Institute of Pathology at the University of Heidelberg contributes to patient care, teaching, advanced training, quality management, and research. Its key task is the diagnostic evaluation of tissues (histology) and cell preparations (cytology). The Institute analyzes more than 60,000 samples a year from operative and conservative medicine, an essential component of clinical diagnostics and therapy planning, and provides consultation in many areas, for example tumor diagnostics.

Institute of Medical Biometry and Informatics, University Heidelberg:

The Institute of Medical Biometry and Informatics at the University of Heidelberg contributes to teaching, advanced training, and clinical research. Biometry is concerned with the methodology and conduct of therapeutic, diagnostic, and meta-analytic studies. Research subjects in medical informatics include bioinformatics/systems biology, knowledge-based diagnosis and therapy, the management of health data, and medical image processing and pattern recognition. In collaboration with Heilbronn University, the institute conducts Germany's oldest curriculum in medical informatics.

Requests by journalists:

Dr. Annette Tuffs

Head of Public Relations and Press Department

University Hospital of Heidelberg and

Medical Faculty of Heidelberg

Im Neuenheimer Feld 672

D-69120 Heidelberg

Germany

phone: +49 6221 / 56 45 36

fax: +49 6221 / 56 45 44

e-mail: annette.tuffs(at)med.uni-heidelberg.de


Saturday, June 28, 2008

Army research : Mobility


Semi-Autonomous Robotics


Semi-Autonomous Robotics for FCS:

This Army Technology Objective will develop autonomous mobility technology critical for Army Future Force systems, including unmanned elements of Future Combat Systems (FCS), Land Warrior (LW), and crew aids for manned systems. The principal focus is on robotic elements that maneuver in high-hazard environments forward of manned systems. It will combine robotic functionality with human capabilities to provide flexible, semi-autonomous control modes for FCS elements, and will give future land combat forces significant new operational capabilities, permitting paradigm shifts in the conduct of ground warfare and enabling significantly greater survivability and deployability.

The effort supports and complements the joint Tank and Automotive Research, Development and Engineering Center/U.S. Army Research Laboratory (TARDEC/ARL) Robotic Follower Advanced Technology Demonstration, the Crew-Integration and Automation Testbed Advanced Technology Demonstration, and the Aerial Reconnaissance Vehicle (ARV) Robotics Technology Army Technology Objective. A key element of this Army Technology Objective is the research being conducted by the Robotics Collaborative Technology Alliance (CTA), a consortium of industrial and academic institutions working collaboratively with ARL and the Army Research, Development & Engineering Centers (RDECs) to advance robotics technology.

Technical efforts focus on the continuous advancement of perception for autonomous ground mobility; intelligent vehicle control and behaviors; and human supervision of unmanned ground systems, including the specialized sensor and network developments required to achieve robust semi-autonomous performance. Research is closely connected with statistically rigorous field experiments that integrate ground truth with robot and operator performance to evaluate new technology.
Robotics Basic Research Collaborative Technology Alliance:
This project conducts basic research in key scientific areas that will expand the capabilities of intelligent mobile robotic systems for military applications. Research will be conducted in perception, including the exploration of sensor phenomenology and the maturation of basic machine-vision algorithms; in intelligent control, including the maturation of artificial-intelligence techniques for robot behaviors that permit adaptation to unknown and/or dynamic environments; and in broadening understanding of the interaction of humans with machines. The program will conduct both analytic and experimental studies.


Advanced Propulsion and Transmission Technologies


Advanced Propulsion and Transmission Fundamentals:

This basic technology program is aimed at developing a fundamental understanding of new, advanced aerodynamic engine component concepts; advanced mechanical component concepts to enable major advances in rotorcraft mechanical power transmission; and high temperature materials and structures to enable substantial increases in efficiency, power density, and affordability of small gas turbine engines.
Small Heavy Fuel Engine:
ARL is providing engine component level technology and high temperature materials and structures concepts to enable a small, lightweight, efficient heavy fuel propulsion capability supporting A160 and other class 4 Unmanned Aerial Vehicles (UAVs). Our technologies directly contribute to the target goals of reducing specific fuel consumption by 20%, increasing horsepower-to-weight ratio by 50%, and reducing the operating and support cost by 35%. It also provides a technology base/tools for application to Future Force ground vehicle and manned/unmanned rotorcraft engine development.
Drive System Technology:
ARL contributes component level power transmission technology and advanced power transmission concepts to enable at least a 40% increase in the vehicle drive-system power-to-weight ratio without sacrificing life, reliability, or acoustic characteristics. These technologies are applicable to future manned and unmanned aircraft of the Future Force.

Vehicle Structural Mechanics and Dynamics Technologies


Vehicle Structural Mechanics and Dynamics Fundamentals:

This basic technology program is aimed at developing a fundamental understanding of structural mechanics and aeromechanics science and technology to enable revolutionary improvements in vehicle performance, reliability, weight, and cost to achieve Future Force operational capabilities.
Survivable, Affordable, Repairable, Airframe Program (SARAP):
ARL will develop and validate structural analysis and design tools that address the weight, manufacturing, production, and operations requirements for structural concepts that are applicable for Department of Defense (DoD) Future Transport Rotorcraft (FTR), legacy vehicle upgrades, and UAVs. The focus of this technology development will support the characterization and validation of static and fatigue strength, damage tolerance, crashworthiness, and inspectibility/repairability characteristics of composite and hybrid vehicle structures.
Advanced Rotor Technologies:
ARL is providing advanced concepts and improved rotorcraft loads analysis models specifically for addressing the ultimate goal of developing a "no-swashplate" rotor concept. The payoff for a "no-swashplate" rotor will be reduced manufacturing, operating, and support costs through integral blade control concepts while eliminating conventional rotor hardware. The advances made by ARL in “on-blade” active twist will be extended from vibration control technology to full authority flight control capability.

Thursday, June 19, 2008

Sandia National Laboratories : Department of Energy lab develops what may be the world's smallest mini-robot

MINI-ROBOT RESEARCH — Sandia National Laboratories researcher Doug Adkins takes a close-up view of the mini-robots he and Ed Heller are developing. At 1/4 cubic inch and weighing less than an ounce, they are possibly the smallest autonomous untethered robots ever created.

What may be the world's smallest mini-robot being developed at Sandia can ‘turn on a dime and park on a nickel’

ALBUQUERQUE, N.M. — What may be the world’s smallest robot — it “turns on a dime and parks on a nickel” — is being developed by researchers at the Department of Energy’s Sandia National Laboratories. At 1/4 cubic inch and weighing less than an ounce, it is possibly the smallest autonomous untethered robot ever created. Powered by three watch batteries, it rides on track wheels and consists of an 8K ROM processor, a temperature sensor, and two motors that drive the wheels. Enhancements being considered include a miniature camera, microphone, communication device, and chemical micro-sensor.

“This could be the robot of the future,” says Ed Heller, one of the project’s researchers. “It may eventually be capable of performing difficult tasks that are done with much larger robots today — such as locating and disabling land mines or detecting chemical and biological weapons.”

He says it could, for example, scramble through pipes or prowl around buildings looking for chemical plumes or human movement. The robots may be capable of relaying information to a manned station and communicating with each other. They will be able to work together in swarms, like insects. The miniature robots will be able to go into locations too small for their larger relatives.

MINI-ROBOT “turns on a dime and parks on a nickel.”
The mini-robot has already maneuvered its way through a field of dimes and nickels and travels at about 20 inches a minute. It can sit easily on a nickel. The newest robot miniaturization research supports Laboratories Directed Research and Development (LDRD) work started in Sandia’s Intelligent Systems Sensors & Controls Department. In 1996 the department unveiled a Mini Autonomous Robot Vehicle (MARV), a one-cubic-inch robot that contained all the necessary power, sensors, computers, and controls on board. It was made primarily from commercial parts using conventional machining techniques. Over the next several years the department improved the original MARV. The robots’ bodies were made of printed circuit boards, and each had an obstacle detector sensor, radio, temperature sensor, and batteries. At 1.6 x 0.75 x 0.71 inches, they were still larger than was desirable. Sandia roboticist Ray Byrne, who was involved in the LDRD efforts, says about three years ago Intelligent Systems and Robotics Center teamed with Sandia’s Sensor Technologies Department to further miniaturize the robots. They sought out the department’s help because of its expertise in building sensors and other devices on miniature scales. By trying new techniques at packaging electronics, wheel design, and body material, the new team of researchers shrunk the robots to 1/4 cubic inch. Heller, who developed the device’s microelectronics, says one significant innovation that permitted the shrinkage was the use of commercially available unpackaged electronics parts. “Previous small robots consisted of packaged electronic parts that were more bulky and took up valuable space. By eliminating the packaging and using electronic components in die form, we reduced the size of the robots electronics considerably,” Heller says. “This was a first major step.” The unpackaged parts are assembled onto a simple multi-chip module on a glass substrate. 
The assembly was done at Sandia’s Compound Semiconductor Research Laboratory.

Doug Adkins, who developed the mechanical design for the new mini-robot, says the researchers further reduced its size by using a new rapid-prototyping technique to form the device’s body. Called stereolithography, the material-building method lays down a very thin polymer deposit that is cured by a laser. The material, which “grows” as each layer is added, is lightweight, strong, and can be formed into complex shapes. The robot bodies have cavities for the batteries, the electronics-embedded glass substrate, axles, tiny motors, switches, and other parts.

Adkins also redesigned the wheel structure of the device. Earlier models had standard wheels, but their mobility was limited by the wheels’ small size. “I thought of how tanks with their track wheels can maneuver over many large objects and realized the mini-robots could benefit from the same type of wheels,” Adkins says. With the addition of tracks, the robot can now move easily on carpet.

The ultimate size of the miniature robots is limited primarily by the power source: the three watch batteries. The body must be large enough to hold batteries that can support the robot’s power requirements. “Batteries — both the physical size and battery life — have been one of our biggest issues,” Heller says. “The batteries need to run longer and be smaller.”

Over the next few years, with additional help from other Sandia groups, Heller and Adkins expect to add to the mini-robots either infrared or radio wireless two-way communication capability, as well as miniature video cameras, microphones, and chemical micro-sensors.
------------------------------
Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under contract DE-AC04-94AL85000. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major research and development responsibilities in national security, energy and environmental technologies, and economic competitiveness. Media contact: coburro@sandia.gov, (505) 844-0948. Technical contacts: dradkin@sandia.gov, (505) 844-0611; ejhelle@sandia.gov, (505) 844-1798.

World's Largest Robot : Developed by Australian Scientists

A team of Australian scientists is developing the world’s largest industrial robot - a massive beast 75 metres tall, weighing 3500 tonnes and able to devour 150 tonnes of rock in a single bite.
The giant robot is emerging from research designed to install a computer “brain” in a dragline, the huge walking crane with a 100-metre-long boom used to scoop up blasted rock in open-cut coal mines. A team from the Co-operative Research Centre for Mining Technology and Equipment (CMTE) and CSIRO Manufacturing Science and Technology is devising a computerised system that will automate part of the dragline’s operations, improving productivity and sparing both operator and equipment strain. Their world-first achievement has attracted international scientific attention and was recently featured by US space agency NASA on its “Cool Robot of the Week” web site.

“A dragline picks up from 100 to 300 tonnes of fragmented rock with every scoop, then swings it round and delivers it to the spoil pile, then swings back again,” explains CSIRO researcher Dr Jonathan Roberts. “It swings back and forth once a minute, so the smoother and more efficient you can make the operation, the less wear and tear on the machine, the more rock is moved and the less strain on the dragline’s human operator.”

The aim of the project is to automate the parts of the operation where the dragline carries its full load from the pick-up point to the dump point, and then returns to collect a new bucketful. “We liken it to cruise-control in a motor vehicle. It will offer the operator the option of automating the repetitive parts of the process, which account for up to 80 per cent of operating time, allowing him to focus on the more challenging and skilled tasks.” It also has some resemblance to the autopilot in a passenger jet aircraft, especially as a dragline, at $60-100 million, costs about the same as a commercial jet. “The dragline is essentially 1950s technology. It’s fairly low-tech. What we’re doing is retrofitting it with a brain,” Dr Roberts explains.
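The “cruise-control” idea above can be made concrete with a toy state machine: the repetitive carry and return swings go to the computer, while digging and dumping stay with the operator. The phase names and the cycle structure here are illustrative assumptions, not details of the actual CMTE/CSIRO controller.

```python
# Hypothetical sketch of the dragline "cruise control" described above.
# Phase names and the automated/manual split are illustrative assumptions,
# not details of the real CMTE/CSIRO system.

# The repetitive swing phases (up to 80% of cycle time) are automated;
# digging and dumping remain with the human operator.
AUTOMATED_PHASES = {"swing_to_dump", "swing_back"}
CYCLE = ["dig", "swing_to_dump", "dump", "swing_back"]

def run_cycle():
    """Return (phase, controller) pairs for one dig-swing-dump cycle."""
    return [(phase, "computer" if phase in AUTOMATED_PHASES else "operator")
            for phase in CYCLE]
```

In the real system the operator first “teaches” the dig and dump positions with a joystick; that calibration step is omitted from this sketch.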
It has been estimated that increasing the productivity of a dragline by around 4 per cent, which may not sound all that much, would save the typical Australian coal mine $3 million a year, or $280 million for Australia as a whole. Any productivity improvements gained using the automated swing-control system have yet to be quantified, but it is hoped that 4 per cent is possible.

The system is being trialled on a dragline at Tarong Coal’s Meandu Mine, near Kingaroy, in Queensland. The latest tests have demonstrated co-ordinated computer control of the hoist, drag and swing functions, and placement of the 40-tonne bucket gently on the ground, ready for loading, with the computer moving the dragline’s pedals and levers. “We were able to move this particular machine, which weighs 3500 tonnes, just using a computer mouse!” Dr Roberts says.

Using a joystick, the dragline’s human operator “teaches” his robot partner the exact locations of the coalface and spoil pile. The computer memorises them and then performs the operation more smoothly than its human mentor. In time, says Dr Roberts, they may be able to add a radar sensor to the boom to help the computer locate its targets.

CMTE’s work in automating the dragline swing cycle is funded by a consortium consisting of the Australian Coal Association Research Program (ACARP), Rio Tinto, BHP Australia Coal Pty Ltd, CSIRO and CMTE. Valuable support has been provided by Bucyrus (Australia), Tritronics Pty Ltd and the staff of Meandu mine.

More information: Dr Jonathan Roberts, CSIRO, jmr@brb.dmt.csiro.au
http://www.cmte.org.au/
http://www.cat.csiro.au/automation/dragline.html

Source: Live Science

Subsea Robot : The World's Biggest Robot

According to a Newcastle upon Tyne, UK newspaper, companies installing subsea cables for telecommunications companies and pipelines for the oil industry now have a new tool: the UT-1 Ultra Trencher, the world's biggest subsea robot. This beauty weighs 60 tons (in air) and measures 7.8 meters long, 7.8 meters wide and 5.6 meters high. In fact, it has the dimensions of a small house but is more expensive, carrying a price tag of about £10 million. It can move at a speed of 2 to 3 knots under the sea, and it can trench pipelines of up to 1 meter in diameter at depths of up to 1,500 meters.
This huge subsea robot was built by Soil Machine Dynamics (SMD), a Newcastle upon Tyne company which develops specialized remote-controlled submersible robots (ROVs). The first UT-1 Ultra Trencher has been delivered to CTC Marine Projects, a subsea contractor based in the North East of England, UK, and a subsidiary of DeepOcean, a Norwegian company focused on subsea services. The robot will be permanently installed on one of their new vessels, the Volantis (PDF format, 6 pages, 764 KB).

Here is a short excerpt from the Henderson article: "Weighing 50 tonnes and the size of a small house, it is designed to bury large-diameter oil and gas pipelines laid on the ocean floor. It does this by 'flying' down up to a mile deep below the surface using powerful propellers. It then lands over the pipeline and deploys a pair of 'jet swords' either side of the pipe which inject high pressure water to 'fluidise' the surface. Burying the pipelines protects them from fishing, shipwrecks and natural currents. This enables oil and gas to be safely transported from the offshore fields to land to provide secure energy supplies."

For more information, you'll find the complete specifications of the UT-1 Ultra Trencher in this PDF datasheet (2 pages, 917 KB), from which the above illustration has been extracted. Here is a short excerpt: "The UT-1 Ultra Trencher is the world's most powerful jetting trencher, offering unparalleled flexibility in severe weather deployment and operation. With more than 2 megawatts of total power, the trencher delivers 1.5 megawatts of actual jetting energy to the cutting surface."

Sources:
Tony Henderson, The Journal, Newcastle upon Tyne, UK,

via redOrbit, March 20, 2008; and various websites

Largest Robot : Germans Unveil Titan


German efficiency and determination have produced the world's largest and strongest robot, the KUKA KR 1000. It is full-on huge, with a height of just over 4 meters and a reach of 3.2 meters. The robot, aptly called Titan, has nine motors built into a dynamic yet rock-solid steel base. This kind of machine is obviously built with huge jobs in mind, such as building bridges and lifting train cars. Those nine motors give the robot the power of a mid-sized car. At a recent demonstration, the robot transported 1,000 kilograms (about 2,200 pounds). Scientists have also successfully used the Titan to lift massive sections of concrete staircases. This is the kind of robot that will be used to build cars, as well as to perform multiple tasks every day in foundries. It also has a place in the Guinness Book of Records as the largest and strongest robot yet built.
Source : gizmag.com

Mini Robot : Dartmouth Researchers Build World's Smallest Robot


Dartmouth researchers built what they claim is the world's "smallest untethered, controllable robot." Built from micro-electromechanical systems (MEMS) fabricated using processes similar to those used to manufacture integrated circuits, the microbot is approximately as wide as a human hair and about 250 micrometers long (one micrometer = 1/1000 of a millimeter).

From Dartmouth News: "It's tens of times smaller in length, and thousands of times smaller in mass, than previous untethered microrobots that are controllable," says researcher Bruce Donald. "When we say 'controllable,' it means it's like a car; you can steer it anywhere on a flat surface, and drive it wherever you want to go. It doesn't drive on wheels, but crawls like a silicon inchworm, making tens of thousands of 10-nanometer steps every second. It turns by putting a silicon 'foot' out and pivoting like a motorcyclist skidding around a tight turn."

Future applications for MEMS microrobots include ensuring information security, such as assisting with network authentication and authorization; inspecting and making repairs to an integrated circuit; exploring hazardous environments, perhaps after a chemical explosion; or biotechnology applications, say to manipulate cells or tissues.
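The quoted step rate implies a surprisingly brisk pace for something 250 micrometers long. A quick back-of-envelope check, assuming 20,000 steps per second as a stand-in for "tens of thousands":

```python
# Back-of-envelope check of the figures quoted above. The 20,000 steps/s
# value is an assumption standing in for "tens of thousands of steps".
STEP_M = 10e-9           # 10-nanometer steps
STEPS_PER_SECOND = 20_000
BODY_LENGTH_M = 250e-6   # robot body is about 250 micrometers long

speed_m_per_s = STEP_M * STEPS_PER_SECOND           # 0.0002 m/s = 0.2 mm/s
body_lengths_per_s = speed_m_per_s / BODY_LENGTH_M  # ~0.8 body lengths/s
```

That works out to roughly 0.2 mm/s, or close to one body length per second, which in relative terms is comparable to a strolling human.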

Smallest Robot : Meet the World's Smallest Robot in Kuala Lumpur

Standing only 15cm tall and controlled by infrared, the robot is able to “perform” for two hours. For creator Chang Ho Yu, the robot, the size of a pocket pad, is a life-long dream come true.
“I have always been very interested in making robots, especially since my company discovered some very advanced technology from Japan three years ago. But that technology was expensive, so we decided to create our own robot using cheaper technology,” said Chang, the general manager of Taiwan-based GeStream Technology Inc, at the exclusive preview of the robot here yesterday.

With 16 degrees of freedom, which translates to 65,536 motions, the Taiwanese humanoid robot is the smallest, lightest and cheapest in the world. “This new version is suitable for robot games and it is easy for the owner to download and update the newest robot motions or movements,” he said, adding that it was also easy to assemble and disassemble the battery-operated robot.

“Because the robot is sold in pieces, the owner will need to programme and learn to build his own robot. He can also add in any motion he wants to the robot,” said Chang. “In Taiwan, this robot is used as an educational tool for children. A lot of patience and discipline is needed in building a good robot.”

In response to a question, Chang said that there were plans to modify the look of the robot and to add voice control in the near future. “We can make it look cuter or more feminine or masculine and produce it in different colours,” he said.

The robot is priced at about RM650 to RM700. The tiny machine makes its grand appearance at 3pm from Sept 6 to 8 at the 8th International Strategic Partnership & Business Networking Trade Fair for SMEs (Global SMEs 2007), to be held at the Matrade Exhibition & Convention Centre KL.

Robot Research : Robots Go Where Scientists Fear to Tread

Ayanna Howard, an associate professor in the School of Electrical and Computer Engineering at Georgia Tech, with a SnoMote, a robot designed to gather scientific data in ice environments.
------------------------------------
ATLANTA (May 27, 2008) — Scientists are diligently working to understand how and why the world’s ice shelves are melting. While most of the data they need (temperatures, wind speed, humidity, radiation) can be obtained by satellite, it isn’t as accurate as good old-fashioned, on-site measurement, and static ground-based weather stations don’t allow scientists to collect data from as many locations as they’d like.
Unfortunately, the locations in question are volatile ice sheets, possibly cracking, shifting and filling with water — not exactly a safe environment for scientists. To help scientists collect the more detailed data they need without risking their safety, researchers at the Georgia Institute of Technology, working with Pennsylvania State University, have created specially designed robots called SnoMotes to traverse these potentially dangerous ice environments. The SnoMotes work as a team, autonomously collaborating among themselves to cover all the ground necessary to gather their assigned scientific measurements. Data gathered by the SnoMotes could give scientists a better understanding of the important dynamics that influence the stability of ice sheets.
“In order to say with certainty how climate change affects the world’s ice, scientists need accurate data points to validate their climate models,” said Ayanna Howard, lead on the project and an associate professor in the School of Electrical and Computer Engineering at Georgia Tech. “Our goal was to create rovers that could gather more accurate data to help scientists create better climate models. It’s definitely science-driven robotics.”

Howard unveiled the SnoMotes at the IEEE International Conference on Robotics and Automation (ICRA) in Pasadena on May 23. The SnoMotes will also be part of an exhibit at the Chicago Museum of Science and Industry in June. The research was funded by a grant from NASA’s Advanced Information Systems Technology (AIST) Program.

Howard, who previously worked with rovers at NASA’s Jet Propulsion Laboratory, is working with Magnus Egerstedt, an associate professor in the School of Electrical and Computer Engineering, and Derrick Lampkin, an assistant professor in the Department of Geography at Penn State who studies ice sheets and how changes in climate contribute to changes in these large ice masses. Lampkin currently takes ice-sheet measurements with satellite data and ground-based weather stations, but would prefer the more accurate data possible with the simultaneous ground measurements that efficient rovers can provide.

“The changing mass of Greenland and Antarctica represents the largest unknown in predictions of global sea-level rise over the coming decades. Given the substantial impact these structures can have on future sea levels, improved monitoring of the ice sheet mass balance is of vital concern,” Lampkin said. “We’re developing a scale-adaptable, autonomous, mobile climate monitoring network capable of capturing a range of vital meteorological measurements that will be employed to augment the existing network and capture multi-scale processes under-sampled by current, stationary systems.”
The SnoMotes are autonomous robots, not remote-controlled ones; they use cameras and sensors to navigate their environment. Though the current prototype models don’t include a full range of sensors, the robots will eventually be equipped with all the sensors and instruments needed to take the measurements specified by the scientist. While Howard’s team works on versatile robots with the mobility and artificial intelligence (A.I.) skills to complete missions, Lampkin’s team will be creating a sensor package for later versions of Howard’s rovers.

Here’s how the SnoMotes will work when they’re ready for their glacial missions: the scientist will select a location for investigation and decide on a safe “base camp” from which to release the SnoMotes. The SnoMotes will then be programmed with their assigned coverage area and requested measurements. The researcher will monitor the SnoMotes’ progress and can even reassign locations and data collection remotely from the camp as necessary.

When Howard’s research team first set out to build a rover designed to capture environmental data from the field, it took a few tries to come up with an effectively hardy design. The group’s first rover was delicate and ineffective. After that initial failure, they decided to move on to something designed for consistent abuse — a toy. Instead of building yet another expensive prototype, Howard opted to start with a sturdy kit snowmobile, already primed for snow conditions and designed for heavy use by a child. Howard’s group then installed a camera and all necessary computing and sensor equipment inside the 2-foot-long, 1-foot-wide snowmobile. The result was a sturdy but inexpensive rover.

By using existing kits and adding a few extras like sensors, circuits, A.I. and a camera, the team was able to create an expendable rover that wouldn’t break a research team’s bank if it were lost during an experiment, Howard said. Similar rovers under development at other universities are much more expensive, and the cost of sending several units to canvass an area would likely be cost-prohibitive for most researchers, she added.

The first phase of the project is focused primarily on testing the mobility and communications capabilities of the SnoMote rovers. Later versions will include a more developed sensor package and larger rovers. The team has created three working SnoMote models so far, but as many SnoMotes as necessary can work together on a mission, Howard said. The SnoMote represents two key innovations in rovers: a new method of location and work-allocation communication between robots, and maneuvering in ice conditions.
Once placed on site, the robots position themselves at strategic locations to make sure all the assigned ground is covered. Howard and her team are testing two different methods that allow the robots to decide amongst themselves which positions they will take to get all the necessary measurements. The first is an “auction” system that lets the robots “bid” on a desired location, based on their proximity to it (as they move) and on how well their instruments are working or whether they have the necessary instrument (one may have a damaged wind sensor, another may have low battery power). The second method is more mathematical, fixing the robots to certain positions in a net of sorts that is then stretched to fit the targeted location. Magnus Egerstedt is working with Howard on this work-allocation method.

In addition to location assignments, another key innovation of the SnoMote is its ability to find its way in snow conditions. While most rovers can use rocks or other landmarks to guide their movement, snow conditions present an added challenge by removing topography and color (everything is white) from the guidance system’s view. For snow conditions, one of Howard’s students discovered that the lines formed by snow banks could serve as markers to help the SnoMote track distance traveled, speed and direction. The SnoMote can also navigate via GPS if snow-bank visuals aren’t available.

While the SnoMotes are expected to pass their first real field test in Alaska next month, a hardier, more cold-resistant version will be needed for the Antarctic and other well-below-zero climates, Howard said. These new rovers would include a heater to keep circuitry warm enough to function and a sturdy plastic exterior that wouldn’t become brittle in extreme cold.

Related Links: Human-Automation Systems Lab (HumAnS); School of Electrical and Computer Engineering at Georgia Tech; Dr. Ayanna Howard; Dr. Derrick Lampkin
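The “auction” allocation described above can be sketched in a few lines. Everything here — the bid formula, the health score, and the greedy one-site-at-a-time assignment — is an illustrative assumption, not the actual SnoMote implementation.

```python
# Illustrative sketch of the "auction" allocation described above: each
# SnoMote bids on a measurement site using its distance to the site and
# its instrument health. The scoring formula and data model are
# assumptions, not the actual SnoMote code.
import math

def bid(robot, site):
    """Lower is better: nearby robots with healthy instruments win."""
    dist = math.hypot(robot["x"] - site[0], robot["y"] - site[1])
    # Penalize degraded instruments (health in 0.0 .. 1.0).
    return dist + 10.0 * (1.0 - robot["health"])

def auction(robots, sites):
    """Greedily award each site to the free robot with the lowest bid."""
    assignments, free = {}, list(robots)
    for site in sites:
        winner = min(free, key=lambda r: bid(r, site))
        assignments[site] = winner["name"]
        free.remove(winner)
    return assignments
```

With this formula a robot sitting right next to a site can still lose it if, say, its wind sensor is badly damaged, which matches the behavior the article sketches.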
-----------
The Georgia Institute of Technology is one of the nation's premier research universities. Ranked seventh among U.S. News & World Report's top public universities, Georgia Tech's more than 18,000 students are enrolled in its Colleges of Architecture, Computing, Engineering, Liberal Arts, Management and Sciences. Tech is among the nation's top producers of women and African-American engineers. The Institute offers research opportunities to both undergraduate and graduate students and is home to more than 100 interdisciplinary units plus the Georgia Tech Research Institute.

Mini Robot : Microrobots Dance on Something Smaller Than a Pin's Head

DURHAM, N.C. -- Microscopic robots crafted to maneuver separately without any obvious guidance are now assembling into self-organized structures, after years of continuing research led by a Duke University computer scientist.

"It's marvelous to be able to do assembly and control at this fine a resolution with such very, very tiny things," said Bruce Donald, a Duke professor of computer science and biochemistry.

Each microrobot is shaped something like a spatula, but with dimensions measuring just microns, or millionths of a meter. They are almost 100 times smaller than any previous robotic designs of their kind and weigh even less, Donald added. Formally known as microelectromechanical system (MEMS) microrobots, the devices are of suitable scale for Lilliputian tasks such as moving around the interiors of laboratories-on-a-chip.

In videos produced by the team, two microrobots can be seen pirouetting to the music of a Strauss waltz on a dance floor just 1 millimeter across. In another sequence, the devices pivot in a precise fashion whenever their boom-like steering arms are drawn down to the surface by an electric charge.
This response resembles the way dirt bikers turn by extending a boot heel.

New research summaries describe the group's latest accomplishment: getting five of the devices to group-maneuver in cooperation under the same control system. "Our work constitutes the first implementation of an untethered, multi-microrobotic system," Donald's team writes in a report to be presented on June 1-2, 2008 during the Hilton Head Workshop on Solid State Sensors, Actuators and Microsystems in South Carolina. More comprehensive details on how the scientists achieve this "microassembly" will be published later in their report for the Journal of Microelectromechanical Systems.

The research was funded by the National Institutes of Health and the Department of Homeland Security, and also included Donald's graduate student Igor Paprotny and Dartmouth College physicist Christopher Levey.

Donald has been working on various versions of the MEMS microrobots since 1992, initially at Cornell and then at Stanford and Dartmouth before coming to Duke. The first versions were arrays of microorganism-mimicking ciliary arms that could "move objects such as microchips on top of them in the same way that a singer in a rock band will crowd surf," he said. "We made 15,000 silicon cilia in a square inch."

A February 2006 report in the Journal of Microelectromechanical Systems by Donald, Paprotny, Levey and others detailed the basics of the current design: devices about 60 microns wide, 250 microns long and 10 microns high that each run off power scavenged from an electrified surface. Propelling themselves across such surfaces in an inchworm-like fashion, impelled by a "scratch-drive" motion actuator, the microrobots advance in steps of only 10 to 20 billionths of a meter each, repeated as often as 20,000 times a second. The microrobots can be so small because they are not encumbered by leash-like tethers attached to an external control system.
Built with microchip-fabrication techniques, they are each designed to respond differently to the same single "global control signal" as voltages charge and discharge on their working parts. This global control is akin to the way proteins in cells respond to chemical signals, said Donald, who also uses computer algorithms to study processes in biochemistry and biology.

In their new reports, the team shows that five of the microrobots can be made to advance, turn and circle together in pre-planned ways when each is built with slightly different dimensions and stiffness. Following a choreography mapped out with the aid of mathematics, the microdevices ultimately assemble into group micro-huddles that could set the stage for something more elaborate.

"Initially, we wanted to build something like a car that could drive around at the microscopic scale," Donald said. "Now what we've been able to do is create the first microscopic traffic jam." He said it took him and various colleagues from 1997 to 2002 to create a microrobot that can operate without a tether, three more years to make the devices steer under global control, and another three to independently maneuver more than one at a time. "The hard thing was designing how multiple microrobots can all work independently, even while they receive the same power and control," he said.

Donald and other Duke researchers are now thinking of trying to enlist the maneuverable microrobots to insert tiny, billionths-of-a-meter electrodes called nanotubes into neural cells. That research will be supported by the Duke Institute for Brain Sciences.
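The "same signal, different responses" idea described above can be illustrated with a toy model: every robot sees one global voltage, but each steering arm snaps down at its own threshold (set by its dimensions and stiffness), so a single waveform produces distinct maneuvers. The thresholds and motion model below are invented for illustration, not taken from the Duke design.

```python
# Toy model of the global control scheme described above: all microrobots
# receive the same voltage, but each steering arm snaps down at a
# different threshold, so one shared signal yields different maneuvers.
# All numbers here are illustrative assumptions.

def tick(robots, voltage):
    """Advance every robot one control step under the shared signal."""
    for r in robots:
        if voltage >= r["arm_threshold"]:
            r["heading_deg"] += 15   # arm pulled down: robot pivots
        else:
            r["forward_steps"] += 1  # arm up: robot inches forward

robots = [
    {"name": "r1", "arm_threshold": 60, "heading_deg": 0, "forward_steps": 0},
    {"name": "r2", "arm_threshold": 100, "heading_deg": 0, "forward_steps": 0},
]
tick(robots, 80)  # r1's arm snaps down and it turns; r2 keeps driving
tick(robots, 50)  # below both thresholds: both drive straight
```

A choreographed sequence of voltages can therefore steer each robot along its own path, which is the essence of controlling several "independent" robots over one shared channel.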

Sunday, May 25, 2008

Robot : New Robot Walks Like A Human


Flame, the robot, walks the way humans walk. (Credit: Image courtesy of Delft University of Technology)
ScienceDaily (May 22, 2008) — Researcher Daan Hobbelen of TU Delft (The Netherlands) has developed a new, highly-advanced walking robot: Flame. This type of research, for which Hobbelen will receive his PhD on Friday 30 May, is important as it provides insight into how people walk. This can in turn help people with walking difficulties through improved diagnoses, training and rehabilitation equipment.

If you try to teach a robot to walk, you will discover just how complex an activity it is. Walking robots have been around since the seventies. The applied strategies can roughly be divided into two types. The first derives from the world of industrial robots, in which everything is fixed in routines, as is the case with factory robots. This approach can, where sufficient time and money are invested, produce excellent results, but there are major restrictions with regard to cost, energy consumption and flexibility.
Human
TU Delft is a pioneer of the other method used for constructing walking robots, which examines the way humans walk. This is really very similar to falling forward in a controlled fashion. Adopting this method replaces the cautious, rigid way in which robots walk with the more fluid, energy-efficient movement used by humans.
PhD student Daan Hobbelen has demonstrated for the first time that a robot can be both energy-efficient and highly stable. His breakthrough came with the invention of the first suitable method for measuring the stability of the way people walk. This is remarkable, as ‘falling forward’ is traditionally viewed as an unstable movement.
Next he built a new robot with which he was able to demonstrate the improved performance: Flame. Flame contains seven motors, an organ of balance and various algorithms which ensure its high level of stability.
For instance, the robot can apply the information provided by its organ of balance to place its feet slightly further apart in order to prevent a potential fall. According to Hobbelen, Flame is the most advanced walking robot in the world, at least in the category of robots which apply the human method of walking as a starting principle.
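The foot-placement strategy just described (widen the next step when the balance organ senses a sideways lean) can be sketched as a simple proportional rule. The nominal stance width and gain below are invented numbers; Flame's actual controller is far more sophisticated.

```python
# Minimal sketch of the balance idea described above: if the balance organ
# reports a sideways lean, place the next foot further out to catch a
# potential fall. The constants are illustrative assumptions, not values
# taken from Flame.

NOMINAL_STANCE_M = 0.10  # assumed nominal lateral foot offset (meters)
GAIN_M_PER_RAD = 0.5     # assumed extra widening per radian of sensed lean

def next_foot_offset(lean_rad):
    """Lateral offset for the next footstep, given the sensed body lean."""
    return NOMINAL_STANCE_M + GAIN_M_PER_RAD * abs(lean_rad)
```

Upright walking keeps the energy-efficient nominal stance, while a sensed lean widens the next step, trading a little efficiency for a recovery margin — the controlled "falling forward" described above.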


Rehabilitation


Modelling the walking process allows researchers to construct two-legged robots which walk more naturally. More insight into the walking process can in turn help people with walking difficulties, for example through improved diagnoses, training and rehabilitation equipment. TU Delft is working on this together with motion scientists at VU University Amsterdam.
Hobbelen cites ankles as an example. These joints are a type of spring which can be used to define the best level of elasticity. Research conducted by Hobbelen into Flame’s ankles has provided motion scientists with more insight into this topic.


Football-playing robots


Over the next few years, TU Delft intends to take major steps forward in research into walking robots. These include developing walking robots which can ‘learn’, see and run.
One very special part of the robot research concerns football-playing robots. On Thursday 29 May, together with the University of Twente, TU Eindhoven and Philips, TU Delft will present the Dutch RoboCup team which is to participate in the 2008 RoboCup Soccer in China this summer.
This presentation will take place at TU Delft during the international Dynamic Walking 2008 conference held from 26-29 May. Biomechanics experts, motion scientists and robot experts will come together at this event to exchange expertise on the walking process.


Adapted from materials provided by Delft University of Technology.