You are currently browsing the category archive for the ‘Robots’ category.

I don’t know enough about the science to tell whether this is a significant step forward for neural-interface systems in general, or just a demonstration of their specific potential to aid paralyzed individuals, but this report from the NYT is encouraging either way:

Two people who are virtually paralyzed from the neck down have learned to manipulate a robotic arm with just their thoughts, using it to reach out and grab objects. One of them, a woman, was able to retrieve a bottle containing coffee and drink it from a straw — the first time she had served herself since her stroke 15 years earlier, scientists reported on Wednesday….

Scientists have predicted for years that this brain-computer connection would one day allow people with injuries to the brain and spinal cord to live more independent lives. Previously, researchers had shown that humans could learn to move a computer cursor with their thoughts, and that monkeys could manipulate a robotic arm.

The technology is not yet ready for use outside the lab, experts said, but the new study is an important step forward, providing dramatic evidence that brain-controlled prosthetics are within reach.

I think there is a non-trivial probability that future computer interfaces controlled only by our minds will be popular (whether they will be possible no longer seems much of a question). I’m not sure whether this will replace current inputs entirely, as the telephone did Morse code, or just complement them, as the mouse did the keyboard. In either case, I think adoption of such technology will go hand-in-hand with the continuing integration of brains and computers. Psychologically, I think controlling computers with your mind will make computer memories feel much more like actual memories, and will blur the line between the two further. After all, having no manual inputs means the entire process will occur internally, with no moving parts in the real world: simply think what you want to know, and have it appear floating in front of your face (augmented reality). I predict this will feel very different from even just waving your hands in the air, Minority Report style.

ADDED: Mark Thoma has what looks like a very interesting video on all of this from the Milken Institute. Fast-forward to around the 50-minute mark for some amazing footage.

ADDED II: From MathDR in the comments, apparently Doc Ock will be real some day:

I spoke with the DARPA program manager regarding this project (this was a while ago) and I remember his statement about the impact of this research: basically, when doing experiments on monkeys with no impairment of limbs (they constrained the monkey’s arm with a sling to inhibit movement and force it to use the robotic arm), the question was what would happen when the sling was removed.

The monkey responded by utilizing both of its arms normally AND the robotic arm as a THIRD arm. This implies (extending to humans) that we would be able to *extend* our anatomy to multiple appendages and maybe even other toolsets (surgical tools on the end of appendages, etc.)


These are two things I’ve written about lately, and I want to draw an explicit parallel between them. The first is Andrew McAfee and Erik Brynjolfsson’s Race Against the Machine, which argued that technology is progressing so quickly that it “confounds expectations and intuitions”. The part I want to address in particular is where they try to predict the jobs in which humans have the most sustainable comparative advantages. In addition to problem solving and creativity, they cite manual work:

If, as these examples indicate, both pattern recognition and complex communication are now so amenable to automation, are any human skills immune? Do people have any sustainable comparative advantage as we head ever deeper into the second half of the chessboard? In the physical domain, it seems that we do for the time being. Humanoid robots are still quite primitive, with poor fine motor skills and a habit of falling down stairs. So it doesn’t appear that gardeners and restaurant busboys are in danger of being replaced by machines any time soon.

The second thing I’ve written about that I want to connect to this is DARPA’s new grand challenge. This contest is very specifically seeking to address this disadvantage that robots have compared to humans. Here are the tasks a robot will have to complete to win the challenge:

1. Drive a utility vehicle at the site.
2. Travel dismounted across rubble.
3. Remove debris blocking an entryway.
4. Open a door and enter a building.
5. Climb an industrial ladder and traverse an industrial walkway.
6. Use a power tool to break through a concrete panel.
7. Locate and close a valve near a leaking pipe.
8. Replace a component such as a cooling pump.

Sorry, gardeners and busboys. A robot that can do all of these can weed a garden and clear a table. Oh, and that part about robots falling down stairs? Here is a new video from DARPA showcasing a robot that “is expected to be used as government-funded equipment (GFE) for performers in Tracks B and C of the DARPA Robotics Challenge.”

It is appropriate that the book about how machines are outperforming our expectations is having its expectations outperformed by machines.

Over at The Atlantic, where we will be guest blogging for Megan, I have a piece about the progress of technology and why we should all be futurists now. One sign of progress is the improvement of autonomous vehicles, and the DARPA Grand Challenge, which paid million-dollar cash prizes, played a big part in helping that along. Now DARPA has a new challenge for humanoid robotics. Here is a summary of the challenge:

The goal of this Grand Challenge is to create a humanoid robot that can operate in an environment built for people and use tools made for people. The specific challenge is built around an industrial disaster response.

There are also details on what the humanoid robot will be required to do:

1) The robot will maneuver to an open frame utility vehicle, such as a John Deere Gator or a Polaris Ranger. The robot is to get into the driver’s seat and drive it to a specified location.

2) The robot is to get out of the vehicle, maneuver to a locked door, unlock it with a key, open the door, and go inside.

3) The robot will traverse a 100 meter, rubble-strewn hallway.

4) At the end of the hallway, the robot will climb a ladder.

5) The robot will locate a pipe that is leaking a yellow-colored gas (non-toxic, non-corrosive). The robot will then identify a valve that will seal the pipe and actuate that valve, sealing the pipe.

6) The robot will locate a broken pump and replace it.

Hopefully this will spur progress in humanoid robotics just as the earlier Grand Challenges spurred progress in autonomous vehicles.

I think a big part of why people struggle to imagine a world where cars drive themselves is that they believe too few people will want it, at least in this country. There needs to be a minimum number of customers to support both the technology and the political will to pass laws allowing something that many will instinctively feel is dangerous. So how do we get from here to there? Garrett Jones proposes one way that I find persuasive:

A thin edge of the wedge for Google Cars: an alternative to driver’s licenses for some of the elderly. Voter demand meets tech solution.

If there is one group in this country with the money, the demand, and the political influence to get us driverless cars, it is the elderly. Another constituency will be the disabled, as illustrated in the following video of a blind man riding in a Google driverless car:

Many see driverless cars as a solution to a problem we don’t have, but for these groups it would be an extremely liberating technological advance.

People have complained that Amazon warehouse jobs are marked by poor conditions and low pay, but this may be less of a problem in the not too distant future. Amazon has acquired robot maker Kiva Systems for $775 million and plans to replace warehouse workers with autonomous robots. The New York Times reports:

…Kiva Systems’ orange robots are designed to move around warehouses and stock shelves.

Or, as the company says on its Web site, using “hundreds of autonomous mobile robots,” Kiva Systems “enables extremely fast cycle times with reduced labor requirements.”

In other words, these robots will most likely replace human workers in Amazon’s warehouses.

Despite the ugly conditions that can reportedly occur at Amazon warehouses, I don’t think the workers will be better off when these jobs are replaced by robots. The article also reports on the general “robots are stealing our jobs” trend:

Robots have been in factories for decades. But increasingly we will see them out in the open. Already little ones — toys, really — sweep floors. But they are getting better at doing what we do. Soon, if Google’s efforts to create driverless cars are successful, cab drivers, cross-country truckers and even ambulance drivers could be out of a job, replaced by a computer in the driver’s seat.

In the video below you can see Kiva robots perform The Nutcracker.