Self-Driving Cars Could Be Decades Away, No Matter What Elon Musk Said

In 2016, Lyft CEO John Zimmer predicted that by 2025 people would no longer own cars at all. It is now 2021, and some experts don’t know when, if ever, people will be able to buy steering-wheel-free cars that pull themselves out of a parking space.

Unlike investors and CEOs, scientists who study artificial intelligence, systems engineering and autonomous technologies have long argued that it will take many years, if not decades, to develop a fully autonomous car. Some go further and say that despite the $80 billion already invested, we may never get the promised self-driving cars – at least not without a major breakthrough in artificial intelligence, which nobody expects in the short term, and a complete transformation of our cities.

Even those most enthusiastic about the technology – in 2019, Elon Musk doubled down on his earlier predictions by saying Tesla’s autonomous robot taxis would arrive by 2020 – are starting to openly acknowledge that the skeptics may well be right. A major part of the real-world AI problem must be solved before unsupervised, generalized full self-driving will work, Mr. Musk himself recently tweeted. Translation: to make a car drive like a human, researchers need to create human-level artificial intelligence, and researchers in the field will tell you we have no idea how to do that.

Mr. Musk, for his part, seems to believe that Tesla will get there. He continues to promote its “Full Self-Driving” technology, which is in fact a deceptively named driver-assistance system currently in beta testing.

A recent paper titled “Why AI Is Harder Than We Think” sums up the situation. In it, Melanie Mitchell, a computer scientist and professor of complexity at the Santa Fe Institute, notes that as timelines for autonomous cars have slipped, industry players have redefined the term.
Because these vehicles require geographically constrained test areas and ideal weather conditions – not to mention safety drivers or at least remote overseers – their inventors and proponents have baked all of these limitations into their definition of autonomy. Even with all that star power behind them, Dr. Mitchell writes, none of the predictions has come true.

In the cars you can actually buy, autonomous driving has turned out to be nothing more than advanced cruise control, like GM’s Super Cruise or the optimistically named Tesla Autopilot. San Francisco-based GM subsidiary Cruise is testing autonomous cars without a driver behind the wheel, but with a human monitoring the vehicle from the back seat. In the U.S. there is only one commercial robot-taxi service operating without human drivers – a small service limited to low-density parts of the Phoenix metropolitan area, run by Alphabet subsidiary Waymo.

An autonomous vehicle from General Motors’ Cruise subsidiary during a test drive in San Francisco in 2019.

Photo: Andrey Sokolov/dpa/picture alliance/Getty Images

Yet Waymo cars have been involved in minor accidents in which they were hit from behind, and their behavior – incomprehensible to human drivers – has been cited as a possible cause. One was recently stymied by traffic cones at a construction site. It isn’t clear the company’s vehicles are any more likely to be struck than cars driven by humans, says Nathaniel Fairfield, a software engineer and head of the behavior team at Waymo. The company’s self-driving cars are programmed to be cautious – the opposite of the canonical teenage driver, he adds.

Chris Urmson is the head of autonomous-trucking startup Aurora, which recently acquired Uber’s self-driving-car division. (Uber also invested $400 million in Aurora.) In the next few years we will see autonomous vehicles on the road performing useful tasks, he says, but it will take time for them to become ubiquitous.

Initially, the Aurora vehicles will only be used on highways, for which the company has already created a high-resolution 3D map.

Photo: Aurora

Mr. Urmson says the key to Aurora’s initial deployment is that it will operate only on highways for which the company has already created a high-resolution 3D map. Eventually, Aurora intends for trucks and cars using its systems to travel far beyond the highways where they are first introduced, but Mr. Urmson declined to say when that might happen.

The slow rollout of limited, perpetually human-monitored autonomous vehicles was predictable – and predicted – years ago. But some executives and engineers argued that new autonomous-driving capabilities would emerge once these systems logged enough miles on the road. Today, some believe that all the test data in the world cannot compensate for fundamental shortcomings in AI.

According to Mary Cummings, a professor of computer science and director of the Humans and Autonomy Lab at Duke University who has advised the Department of Defense on AI, decades of breakthroughs in the branch of artificial intelligence known as machine learning have yielded only the most primitive forms of intelligence. To evaluate modern machine-learning systems, she has developed a four-level scale of AI sophistication.

At the bottom is skill-based, bottom-up thinking: today’s AIs learn very well, for example, not to drift out of a highway lane. The next level is rule-based learning and reasoning (what to do at a stop sign, say). Then comes knowledge-based reasoning (is it still a stop sign if it’s half hidden by a tree branch?). And at the top is expert reasoning: the uniquely human ability to enter a completely new scenario and use one’s knowledge, experience and skills to come out unscathed.

Driverless cars start running into trouble at that third level. Current deep-learning algorithms – the elite of machine learning – cannot create knowledge-based representations of the world, Dr. Cummings says.
And engineers’ attempts to compensate for this shortcoming – such as building ultra-detailed maps to fill gaps in sensor data – typically aren’t updated often enough to guide a vehicle through every possible situation, such as an unfamiliar construction site.

Machine-learning systems, which are excellent at matching patterns, are terrible at extrapolation – transferring what they have learned in one domain to another. A system might, for example, identify a snowman on the side of the road as a potential pedestrian, but fail to grasp that it is an inanimate object unlikely to cross the street. As a child, you are taught once that the stove is hot, says Dr. Cummings. But AI isn’t good at transferring knowledge from one data set to another, she adds; you have to teach it anew with every new data set.

Some researchers at the Massachusetts Institute of Technology are trying to close this gap by going back to basics: studying, in engineering terms, how children learn, in order to apply those lessons to future artificial-intelligence systems.

Billions of dollars have been invested in the self-driving-car industry, and the industry isn’t getting what it hoped for, says Dr. Cummings. That isn’t to say we won’t eventually have self-driving cars in some form, she says – it just won’t be what everyone was promised. But, she adds, with small, slow-moving shuttles that operate in well-mapped areas and carry sensors such as lidar, engineers could reduce uncertainty to a level acceptable to regulators and the public. (Think, for example, of airport shuttles running on specially built roadways.)
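As a purely illustrative sketch – not from the article, and nothing like how a real autonomy stack is built – Dr. Cummings’s four-level scale can be mapped onto the driving examples she gives, with the article’s claim (today’s deep learning handles roughly the first two levels, and driverless cars run into trouble at the third) expressed as a simple cutoff:

```python
from enum import IntEnum

class ReasoningLevel(IntEnum):
    """Dr. Cummings's four-level scale of AI sophistication,
    ordered from simplest to most human-like."""
    SKILL_BASED = 1      # bottom-up skill, e.g. staying within lane lines
    RULE_BASED = 2       # e.g. stopping at a stop sign
    KNOWLEDGE_BASED = 3  # e.g. a stop sign half-hidden by a branch
    EXPERT = 4           # navigating a wholly novel scenario safely

# The article's driving examples, tagged with the level each requires.
EXAMPLES = {
    "keep the car between highway lane lines": ReasoningLevel.SKILL_BASED,
    "come to a halt at a stop sign": ReasoningLevel.RULE_BASED,
    "treat a branch-obscured sign as a stop sign": ReasoningLevel.KNOWLEDGE_BASED,
    "handle an unfamiliar construction site": ReasoningLevel.EXPERT,
}

def within_current_ai(level: ReasoningLevel) -> bool:
    # Per the article: today's deep-learning systems manage levels 1-2 well;
    # driverless cars start running into trouble at level 3.
    return level <= ReasoningLevel.RULE_BASED

for task, level in EXAMPLES.items():
    status = "tractable today" if within_current_ai(level) else "open problem"
    print(f"level {int(level)}: {task} -> {status}")
```

The cutoff value is just a restatement of the article’s claim, not a measurable property of any system; the point of the taxonomy is that each level builds on the ones below it.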

Nathaniel Fairfield, a software engineer and head of Waymo’s behavioral team, says his team sees no fundamental technological barriers to the large-scale deployment of self-driving robot taxi services like those offered by his company.

Photo: Caitlin O’Hara/REUTERS

Waymo’s Mr. Fairfield says his team sees no fundamental technological barriers to the widespread rollout of self-driving robot-taxi services like the one his company operates. If you are too conservative and ignore reality, you say it will take 30 years – but it won’t, he adds.

A growing number of experts believe the road to full autonomy will not run through human-level artificial intelligence alone. Engineers have solved countless other complex problems – including landing spacecraft on Mars – by breaking a problem into small parts so that smart people can build systems to address each one. Raj Rajkumar, a professor of engineering at Carnegie Mellon University who has long studied self-driving cars, is optimistic about this path. It won’t happen overnight, but I can see the light at the end of the tunnel, he says.

This is broadly the strategy Waymo is pursuing to get its autonomous vehicles on the road, and it is why the company doesn’t believe it needs full-blown AI to solve the driving problem, Mr. Fairfield said. Aurora’s Mr. Urmson says his company combines AI with other technologies to create systems that can apply general rules to novel situations, much as a human would.


According to Dr. Mitchell, falling back on proven systems engineering for autonomous vehicles would still require huge sums to equip our roads with transmitters and sensors to guide and correct the robot cars. And the vehicles would still be limited to certain areas and certain weather conditions, with human monitors on standby in case of trouble, she adds. This Disney-animatronic version of the autonomous-driving future would be a far cry from an artificial intelligence that could simply be dropped into any car and instantly replace a human driver.

The likely result is safer human-driven cars, and fully autonomous cars in a few closely watched areas. But it won’t be the end of car ownership – not anytime soon.

For more technology analysis, reviews, tips and headlines from the WSJ, sign up for our weekly newsletter. Email Christopher Mims at [email protected]

Copyright ©2020 Dow Jones & Company, Inc. All rights reserved.

