
The skateboarder was eyeing a self-driving car and decided that it might be quite rad to try and hitch a ride.

The overall act of skateboard hitching has lots of slang names such as skitching, skate-hitching, bumper hitching, bizzing, bumper shining, and so on. One thing for sure is that it can be assuredly labeled as ultra-dangerous and an altogether bad idea.

You might be tempted to think that a skateboarder that rides up along a moving car and grabs onto the vehicle is perhaps demonstrating tremendous prowess as a skater or boarder. There are plenty of online videos and amateur social media postings that seem to highlight this crazy act. The person riding the skateboard is usually waving recklessly at the camera and acting like they are having the time of their life.

Oftentimes, these brazen efforts are done without a so-called brain bucket (that’s lingo for wearing a helmet).

Acrid critics are quick to point out that those trying to do these inappropriate stunts are probably brainless to start with, thus the omission of a helmet is (smarmily) suggested as befitting the circumstance. Anyway, without getting into endless name-calling, some also point out that you don’t see the number of times that the skateboarders took a fall, including suffering an injurious abject face plant into the unforgiving street and experiencing a total and calamitous wipeout.

Many people though do customarily think fondly of this skate-hitching activity overall, perhaps as a result of having seen Michael J. Fox adroitly do so in the famous Back To The Future movie series. You might hazily recall that there is a scene in the movie (spoiler alert!) that entails his fictional character, Marty McFly, riding on a skateboard and grabbing onto the bumper of a car for a bit of a relatively short joyride. The funny joke underlying the brash act is that it is repeated in the present, the past, and the future. It is a quite memorable gag and a clever way to weave into the plot those seemingly everyday activities of life that tie together the passage of time.

The thing is, some might construe that wanton action as perfectly fine and merely shrug off the underlying perils that it entails. A plethora of video games portray bumper hitching in various heroic fashions, as do lots of shows on TV and cable.

Not wanting to sound like a fuddy-duddy, but you’ve got to admit that this is a hazardous and ill-advised skateboarding ploy.

What can go wrong?


The skateboarder can fall and get injured. This can be magnified immensely due to the speeds involved.

If you fall from a skateboard while going at an ordinary walking speed of a few miles per hour, you might not get especially broken apart: possibly scrapes and bruises, maybe a broken bone. It isn’t pretty. Grasping onto a car that is increasing speed and could end up reaching twenty or thirty miles per hour is not simply going to produce an injurious fall, it could be a death-sentence kind of fall.

Take a look online and you’ll see reported deaths in which the skateboarder was hitching onto cars going at even a seemingly piddling five to ten miles per hour. That doesn’t seem fast-moving, but a human that takes a tumble is going to, unfortunately, discover that the human body is not made for those types of smacking blows at those speeds.

The obvious danger is landing on the harsh asphalt at a high rate of impact. There is also the consequent rolling and tumbling that can be hazardous to your health too. You could uncontrollably tumble into a parked car. The parked car will win that type of collision, and you will lose.

Another scary aspect is that you might roll under the very car that is towing you. In that case, the wheels and undercarriage can scrape and drag you along. This is bound to do irreparable damage to you. The driver of the car might suddenly realize what is taking place and try to hit the brakes, but the odds are that you’ll have nonetheless gotten death-inducing flopping despite the car coming to a screeching halt.

There is also the chance of your falling into the path of another nearby in-motion car.

The driver of that vehicle might not see you on the ground or simply assume that you rolled on past. Alas, they might not realize that you’ve gotten actively entangled under their vehicle. I realize that it seems veritably impossible for a driver to not know when they have a person being dragged along under their car, yet this does happen. No sense in rolling the dice on whether other drivers will be paying attention to the roadway rather than driving distracted by an enchanting cat video while at the wheel.

We probably can presume that a skateboarder knows what they are doing when they attempt one of these risky bumper hitching maneuvers. Some skateboarders are likely to underestimate the risks, especially for teenagers that tend to consider themselves invincible and have a carefree attitude. Having seen it done by skateboarding celebrities and stars, the trickery seems doable, and the excitement or thrill of the ride is extraordinarily alluring.

What about the driver?

The driver is assumed to be a licensed driver and therefore a full-fledged adult, or at least a young adult that has been granted the government-bestowed privilege to drive on our public roadways. You could assert that the driver is equally at fault in this gambit of bumper hitching. Maybe more so, since a licensed driver is supposed to know about the dangers of driving and how people can get hurt.

Well, sometimes the driver is unaware of the bumper hitching rider.

The driver was driving along, minding their own business. Suddenly, out of nowhere, a skateboarder swiftly gets their board underway and catches up with the rear of the vehicle, thereupon grabbing the bumper and perhaps stooping or crouching down to not be seen.

If you are shocked to believe that a driver would not know that someone is hanging off the back of their car, you might want to ponder that aspect for a moment. I dare say this could happen to any of us. Your attention as a driver is primarily about the road ahead. You earnestly stare at the upcoming traffic. Sure, you look to the sides of your vehicle too, and you glance in your rearview mirror, though your principal focus is straight ahead.

I submit though that if someone wanted to be sneaky and came up behind your car, they could potentially hitch a ride. Unless you were intentionally on the lookout for such actions, it could readily occur, and you would not know at first that it was happening.

You might be of the resolute opinion that the extra weight and dragging of the grasping skateboarder would undoubtedly be felt while you are at the wheel of the car. This seems doubtful. The skateboard is wheeled. The combined weight and counterforce of the skateboarder is negligible relative to the power of the car. They are like a flea on a camel’s back.

There is something that you might notice.

The sound of the skateboard as it is clicking and clacking along the street surface would be potentially heard by the driver. Of course, modern cars are purposely engineered to provide a noise-free interior. Plus, most people have their car radio going or employ a full-on entertainment system blaring as they drive along. Hearing that bumper hitcher is not necessarily a sure thing.

Then again, if the skateboarder is whooping and hollering at the top of their lungs, and if pedestrians are staring, pointing, and yelling that a person is hanging onto your car, perhaps that would be a clue for even the most clueless of drivers.

Imagine what you might do if the outside world suddenly brought to your attention that a bumper hitching act was taking place on your very car.

It is hard to say how drivers would react.

Some drivers might calmly slow down and come to a gradual halt. Other drivers might jam on the brakes. Some drivers keep going and figure that the bumper hitcher is getting a free ride and can merely opt to drop off the vehicle whenever they wish to do so. One supposes too that there are devilish drivers that might think to hit the gas, accelerating, doing so to scare the dickens out of the errant skateboarder and get them to unleash their grasp (or, worse still, forcefully fling them from the car).

The unpredictability of what an unaware driver will do is yet another of the many adverse risks entailing the bumper hitching scheme (note that this doesn’t imply that a driver knowing what is taking place will somehow be any safer).

We’ve focused on the notion of bumper hitching as taking place solely via accessing the bumper of a car. That doesn’t have to be the case. The phrase still applies when a skateboarder opts to grab onto the trunk, or onto a side-view mirror, or onto a door handle, or pretty much grasp any part of the car that they can get a hold of.

The other variation to keep in mind is that the rider doesn’t necessarily need to be using a skateboard. Sometimes a hitched ride is done by a person wearing skates. It can be done by people riding on a bicycle, or on a scooter, and so on. Anyone that has some form of wheeled contraption is a potential bumper hitching rider.

Is it illegal to do these kinds of bumper hitching acts?

First, to clarify, whether or not it perchance is illegal, it is not an act that should be performed. Let’s make that abundantly clear.

Only on a special closed track, with trained professionals and stunt experts, should bumper hitching be even a morsel of an idea. The professionals that do it take special precautions and are well aware of the risks, and yet some complain that they nonetheless tend to glorify the act and make it seem insider cool.

Yes, they often alert others to not try this at home, but that is still going to inspire the at-home bumper hitching efforts, some lament (the counterargument is that people are going to do these antics anyway, regardless of seeing professionals do so, and therefore it is a helpful outlet to have professionals perform such stunts as an alternative to people trying to do so at home).

The legality of doing a bumper hitching is somewhat murky, here’s why.

Some states indicate in their vehicle code that you cannot do a bumper hitching while making use of a bicycle, nor when roller skating, and yet do not specifically mention skateboards. In that case, some skateboarders try to argue that because it isn’t specifically cited as a particular form of transport in the list of banned possibilities, it ergo must be assumed as legally permissible.

Best to have a serious talk with a lawyer about that.

Some states do explicitly mention skateboards as a mode of transport that cannot be used for undertaking bumper hitching. Thus, it is a relatively straightforward conclusion that doing such an act is bound to be seen as illegal in that state. In addition, many localities have various ordinances that declare any skateboard bumper hitching is considered illegal.

Plenty of other twists and turns exist.

For example, sometimes the law indicates that skateboards cannot be used in the street and may only be used on sidewalks. Think about that. If you use a skateboard to bumper hitch, and presumably you would need to be in the street to do so, you are violating the provision of not riding a skateboard while in the street. It kind of shunts to the side that you were bumper hitching and instead catches the rider with the crafty caveat that you aren’t supposed to be skateboarding in the street to begin with.

Without seeming to be overly repetitive, we ought to not be trying to figure out these minuscule head-of-a-pin kinds of twists, and instead lucidly agree that no one should ordinarily be doing any kind of bumper hitching.

Shifting gears for a moment, the future of cars will entail the advent of self-driving cars. Self-driving cars are going to be using AI-based driving systems and there won’t be a human driver at the wheel (see my extensive coverage of self-driving cars at this link here).

Here is an intriguing question: Could AI-based true self-driving cars become fodder for those skateboarders that wish to do the act of bumper hitching?

Let’s unpack the matter and see.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones that the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Bumper Hitching

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can (see my explanation at this link here).

Why this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about bumper hitching. This is an aspect that needs to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad aspects that come into play on this topic.

We’ll begin by dividing the matter into two separate conditions.

There is the condition of having an automaker or self-driving tech maker purposely devising a capability within the AI driving system for the overt detection of a bumper hitching instance. The other condition consists of an “everyday” self-driving car that has not been intentionally programmed per se for the detection of a bumper hitching instance.

The first listed condition is the easiest to discuss.

Developers of the AI driving system could make use of the existing sensor suite to try and detect that a bumper hitching act is underway. This could include using the onboard image processing capability to try and identify that someone is attempting to hitch a ride on the self-driving car. Generally, the video cameras that are streaming images into the AI driving system would be used to look for a person or persons that are approaching the vehicle and appear to be trying to grab hold of the self-driving car.

Don’t falsely assume that this is an easy task for the object recognition image processing subcomponent. A person outside the vehicle might be very sneaky, perhaps hiding behind a parked car and then suddenly darting into the street. There might be little advance notification of the bumper hitcher getting into action.

Once the bumper hitcher grabs onto the vehicle, it might be hard to see them via the cameras. The cameras are usually aimed rather outward, wanting to capture a wide scene and be able to collect data about where other cars are, where the sidewalk is, and so on. It is rarer that a camera would be angled down toward the bumper.

There is also the problem of potentially generating false alerts. If a pedestrian perchance came into the street and stood near the self-driving car, this could inadvertently be construed as a bumper hitching activity about to get underway. A human driver would realize that the pedestrian is not holding a skateboard and is not standing on one, which is something that the image processing software would need to be programmed to look for.

Other sensors could come into play, assuming that the self-driving car is equipped with them. For example, the radar and LIDAR could potentially detect the presence of the bumper hitcher. This though is bound to be tough to do once the skateboarder is in a crouched position and hugging the side of the car. The odds are that neither the radar nor the LIDAR would readily pick up that an object is that close to the vehicle.

Another possibility involves ultrasonic sensory devices. These are usually used for aiding the AI driving system when the vehicle is being parked. The units can detect nearby objects, such as when doing parallel parking or if the vehicle is attempting to park and an object such as a shopping cart might be in the way. Depending though on the speed of the car and where it is in traffic, trying to use the ultrasonic units for detecting a moving skateboarder that has grabbed onto the vehicle could be somewhat problematic.
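To make the detection discussion concrete, here is a minimal sketch of how fused cues from those sensors might feed a persistence check that damps false alerts from a pedestrian who merely stands near the car. All class names, fields, and thresholds here are hypothetical illustrations, not taken from any actual AI driving system:

```python
from dataclasses import dataclass

@dataclass
class FrameObservation:
    """Hypothetical per-frame outputs from the sensor suite."""
    camera_sees_person_near_rear: bool   # image-processing cue
    person_on_wheeled_board: bool        # classifier cue (skateboard, skates)
    ultrasonic_close_contact: bool       # near-field proximity return

def update_hitcher_belief(history, obs, window=10, threshold=7):
    """Append the latest frame and decide whether to raise an alert.

    An alert requires `threshold` positive frames out of the last `window`,
    where a positive frame needs the camera cue plus at least one
    corroborating signal. Requiring agreement over several frames damps
    one-off false alerts.
    """
    positive = obs.camera_sees_person_near_rear and (
        obs.person_on_wheeled_board or obs.ultrasonic_close_contact
    )
    history.append(positive)
    return sum(history[-window:]) >= threshold
```

A pedestrian standing nearby with no wheeled board and no contact return never trips the alert in this sketch, while a genuine hitcher crosses the threshold after a fraction of a second of consistent frames.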

Given the rarity of someone doing a bumper hitching ride, the argument could be made that it makes little sense to try and program the AI driving system to cope with this outlier possibility. Most of the developers and engineers have their hands full with simply trying to get self-driving cars to safely go from point A to point B. This kind of edge or corner case would likely be listed as a low priority that eventually might come up for being worked on.

You can bet though that if someone does do a bumper hitching with a self-driving car and gets injured, there will be a huge outcry from regulators and the public about how this could have been allowed to happen. In that sense, the seemingly low priority of detecting and responding to a bumper hitching activity will become a vocally outsized must-be-handled issue, causing the automakers and self-driving tech firms to scramble to make up for lost time.

Imagine that a bumper hitching detection capability did exist and was reliable enough to be utilized on an ongoing basis. A vexing question arises as to what to do about the wayward skateboarder. What would we want the AI driving system to do?

I’m sure you might suggest that the answer is obvious, namely the AI driving system should stop the car. Sure, that generally makes sense, but the devil is in the details. How quickly should the vehicle be brought to a stop? Suppose the skateboarder falls while the stopping action is underway, then what? Should the AI somehow try to warn the skateboarder that the AI driving system has detected their presence and is going to stop? Etc.
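As a toy illustration of that "how quickly should the vehicle stop" dilemma, one could imagine a policy along these lines. Speeds are in meters per second, and the function names and numbers are invented purely for illustration; they are not drawn from any real AI driving system:

```python
def choose_deceleration(speed_mps, hitcher_detected, passengers_aboard,
                        comfort_decel=1.5, firm_decel=3.0):
    """Pick a hypothetical deceleration rate (m/s^2) once a hitcher is detected.

    The trade-off sketched in the text: braking hard risks flinging the
    skateboarder (and jolting passengers), while a very gradual stop keeps
    the hitcher attached longer. This toy policy slows gently by default,
    brakes a bit more firmly at higher speed when no passengers are aboard,
    and never slams the brakes.
    """
    if not hitcher_detected:
        return 0.0
    if speed_mps > 10.0 and not passengers_aboard:
        return firm_decel
    return comfort_decel

def stopping_time(speed_mps, decel):
    """Seconds to reach a standstill at constant deceleration."""
    return float('inf') if decel <= 0 else speed_mps / decel
```

Even this crude sketch surfaces the open questions: at 12 m/s (about 27 mph) a "firm" 3 m/s^2 stop still takes four seconds, during which the skateboarder may fall, and the presence of passengers changes the calculus entirely.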

This brings up another interesting and somewhat beguiling topic. Assume that in some instances there might be passengers inside the self-driving car. This complicates matters. If the AI driving system suddenly slams on the brakes, this could harm the passengers (this is a variant of the famous Trolley Problem, see my coverage at this link here).

I’ll add another wrinkle for you to contemplate. Suppose a passenger notices the skateboarder and does so before or in lieu of the AI driving system making such a detection. If the passenger tries to tell the AI driving system that a person is grabbing onto the vehicle, will the AI driving system be able to make use of that exhortation?

Most of the existing self-driving cars have very rudimentary Natural Language Processing (NLP) capabilities, akin to the likes of Alexa and Siri. Those NLP capabilities are not likely at this time to be able to hear a human exclaim that a skateboarder is hugging the car and then turn that into something usable as part of the AI driving system activities. Right now, any kind of utterance beyond the desired destination is routed to a remote agent of the fleet operator, which in the case of bumper hitching could be too little, too late, in terms of the AI driving system getting directed about what to do.
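For illustration only, a crude keyword-based router shows the flavor of what would be needed: flag utterances about someone clinging to the vehicle for the driving system, while everything else goes to the remote agent. The cue lists are tiny hypothetical samples, nowhere near a production NLP capability:

```python
# Illustrative cue lists; a real system would need far richer language
# understanding than keyword spotting.
HITCHER_CUES = {"grabbing", "hanging", "holding", "clinging", "skateboarder"}
VEHICLE_WORDS = {"car", "vehicle", "bumper", "trunk"}

def route_utterance(utterance):
    """Route a passenger utterance to the driving system or a remote agent.

    Only utterances that mention both a clinging/hitching cue and a part
    of the vehicle are escalated to the driving system.
    """
    cleaned = ''.join(c if c.isalnum() else ' ' for c in utterance.lower())
    words = set(cleaned.split())
    if words & HITCHER_CUES and words & VEHICLE_WORDS:
        return "driving_system_alert"
    return "remote_agent"
```

Note how brittle even this is: the passenger has to phrase things just so, which is exactly why routing everything to a human remote agent remains the current fallback.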


I mentioned that there were two major conditions to be considered. One was that the AI driving system was purposely augmented or devised to detect and cope with a bumper hitching vagabond. We’ve taken a close look at the ups and downs of that programming.

The other condition entails an “everyday” self-driving car that has no special provision for this particular use case. What would the de facto self-driving car that has no specific programming for this situation be able to potentially do?

An apparent aspect is that without devoted programming, the odds are lessened that the AI driving system will “natively” (as generically crafted) be able to detect the bumper hitching nomad. There is a slim chance of doing so, and perhaps the AI driving system would then enter into its standard programming for any such exigency, such as gradually bringing the car to a halt, notifying the passengers that the vehicle is being stopped, and sending an electronic alert to the fleet operator.

The trouble for the skateboarder is that they might be able to get away with the bumper hitching, just as it might be possible to do so with a human driver at the wheel, but the skateboarder is taking a huge risk in doing so. Envision that the self-driving car opts to increase speed since it is entering onto a major highway, and all of a sudden the skateboarder is faced with having to decide whether to let go or not. If they summarily let go, their heightened speed puts them at an intensified risk.

The skateboarder might try to feverishly bang on the trunk or the sides of the car, clamoring frantically to have someone or something slow down the vehicle. A human driver might or might not hear this, and might or might not accede to the request. Similarly, the AI driving system might or might not detect this (generally, currently unlikely to do so), and the skateboarder has now led themselves into a worsening and perilous catastrophe. The hitching rider has somewhat dug their own grave, as it were.

Be resolved: Just don’t do any bumper hitching and ergo avoid getting into dire straits altogether.

Recall that I began this discussion by mentioning that a skateboarder was eyeing a self-driving car and appeared to be mulling over an attempt at undertaking a bumper hitching ride. Upon observing this, I was relieved to witness that the self-driving car was moving at such a speed that the bumper hitcher was unable to reach the vehicle. You could suggest that the AI driving system was scooting away to avoid the skateboarder, but that is an unlikely explanation in this particular instance.

On this occasion, the self-driving car was most likely merely proceeding on its driving journey.

To that degree, the skateboarder was quite lucky. Whether it was fate or blind luck, their high-risk ill-advised attempt was rebuffed or at least not able to be completed. This was a case wherein the fish that got away was better off for all and averted what could have been a gut-wrenching and atrocious outcome.

I am hoping that the teenaged skateboarder will live a long life and maybe, just maybe, will one day be involved in the development of AI-based true self-driving cars, possibly getting assigned to figuring out how to try and ensure that AI driving systems can cope with those misbehaving rad-seeking skateboarders (a subject he would certainly know a lot about).

Score one for humanity and AI.

Gnar gnar.

Ford is adding artificial intelligence to its robotic assembly lines.

In 1913, Henry Ford revolutionized car-making with the first moving assembly line, an innovation that made piecing together new vehicles faster and more efficient. Some hundred years later, Ford is now using artificial intelligence to eke more speed out of today’s manufacturing lines.

At a Ford Transmission Plant in Livonia, Mich., the station where robots help assemble torque converters now includes a system that uses AI to learn from previous attempts how to wiggle the pieces into place most efficiently. Inside a large safety cage, robot arms wheel around grasping circular pieces of metal, each about the diameter of a dinner plate, from a conveyor and slot them together.

Ford uses technology from a startup called Symbio Robotics that looks at the past few hundred attempts to determine which approaches and motions appeared to work best. A computer sitting just outside the cage shows Symbio’s technology sensing and controlling the arms. Toyota and Nissan are using the same tech to improve the efficiency of their production lines.
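Symbio has not published the details of its method, but the idea of "look at the past few hundred attempts and favor what worked best" can be sketched as a simple sliding-window selector with occasional exploration. Everything here, including the class name and the notion of discrete "wiggle patterns," is an illustrative stand-in rather than a description of the actual system:

```python
import random
from collections import defaultdict, deque

class MotionSelector:
    """Toy stand-in for learning from the last few hundred attempts.

    Tracks recent insertion times per candidate wiggle pattern and mostly
    replays the fastest one, exploring alternatives a small fraction of
    the time.
    """
    def __init__(self, patterns, window=300, explore=0.1, seed=0):
        self.patterns = list(patterns)
        # Keep only the most recent `window` timings per pattern.
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.explore = explore
        self.rng = random.Random(seed)

    def record(self, pattern, seconds):
        """Log how long an insertion attempt took with a given pattern."""
        self.history[pattern].append(seconds)

    def choose(self):
        """Pick the next pattern to try."""
        if self.rng.random() < self.explore:
            return self.rng.choice(self.patterns)
        tried = {p: sum(h) / len(h) for p, h in self.history.items() if h}
        if len(tried) < len(self.patterns):      # try everything once first
            return next(p for p in self.patterns if p not in tried)
        return min(tried, key=tried.get)         # fastest average wins
```

The sliding window matters: if a fixture wears or parts change subtly, old timings age out and the selector adapts, which is presumably part of the appeal of learning continuously on the line.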

The technology allows this part of the assembly line to run 15 percent faster, a significant improvement in automotive manufacturing where thin profit margins depend heavily on manufacturing efficiencies.

“I personally think it is going to be something of the future,” says Lon Van Geloven, production manager at the Livonia plant. He says Ford plans to explore whether to use the technology in other factories. Van Geloven says the technology can be used anywhere it’s possible for a computer to learn from feeling how things fit together. “There are plenty of those applications,” he says.

AI is often viewed as a disruptive and transformative technology, but the Livonia torque setup illustrates how AI may creep into industrial processes in gradual and often imperceptible ways.

Automotive manufacturing is already heavily automated, but the robots that help assemble, weld, and paint vehicles are essentially powerful, precise automatons that endlessly repeat the same task but lack any ability to understand or react to their surroundings.

Adding more automation is challenging. The jobs that remain out of reach for machines include tasks like feeding flexible wiring through a car’s dashboard and body. In 2018, Elon Musk blamed Tesla Model 3 production delays on the decision to rely more heavily on automation in manufacturing.

Researchers and startups are exploring ways for AI to give robots more capabilities, for example enabling them to perceive and grasp even unfamiliar objects moving along conveyor belts. The Ford example shows how existing machinery can often be improved by introducing simple sensing and learning capabilities.

“This is very valuable,” says Cheryl Xu, a professor at North Carolina State University who works on manufacturing technologies. She adds that her students are exploring ways that machine learning can improve the efficiency of automated systems.

One key challenge, Xu says, is that each manufacturing process is unique and will require automation to be used in specific ways. Some machine learning methods can be unpredictable, she notes, and increased use of AI introduces new cybersecurity challenges.

The potential for AI to fine-tune industrial processes is huge, says Timothy Chan, a professor of mechanical and industrial engineering at the University of Toronto. He says AI is increasingly being used for quality control in manufacturing, since computer vision algorithms can be trained to spot defects in products or problems on production lines. Similar technology can help enforce safety rules, spotting when someone is not wearing the correct safety gear, for instance.

Chan says the key challenge for manufacturers is integrating new technology into a workflow without disrupting productivity. He also says it can be difficult if the workforce is not used to working with advanced computerized systems.

This doesn’t seem to be a problem in Livonia. Van Geloven, the Ford production manager, believes that consumer gadgets such as smartphones and game consoles have made workers more tech savvy. And for all the talk about AI taking blue collar jobs, he notes that this isn’t an issue when AI is used to improve the performance of existing automation. “Manpower is actually very important,” he says.

This story originally appeared on

A recent UNESCO report reveals that most popular voice-based conversational agents are designed to be female. The report outlines the potentially harmful effects that gender bias in artificial intelligence, specifically in chatbots, can have on society.

However, the report focuses primarily on voice-based conversational agents, and its analysis did not include chatbots (i.e., text-based conversational agents). In fact, most research about gender bias in conversational AI focuses only on voice applications.

Chatbots can also be gendered in their design. 

According to Investopedia, “Chatbot, short for chatterbot, is an artificial intelligence (AI) feature that can be embedded and used through any major messaging applications…The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that have the best chance of achieving a specific goal.”

Researchers at the Institute of Information Systems and Marketing (IISM) found that “gender-specific cues are commonly used in the design of chatbots and that most chatbots are – explicitly or implicitly – designed to convey a specific gender.” 

The researchers continue, “More specifically, most of the chatbots have female names, female-looking avatars, and are described as female chatbots.”
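A design team wanting to act on that finding could start with something as simple as an automated check for explicit gender cues in a chatbot's profile text. This is a hypothetical sketch, and the word lists are small illustrative samples, not a vetted lexicon:

```python
# Illustrative cue lists for a design-review check; real audits would need
# far broader lexicons plus review of avatars and voice, not just text.
FEMININE_CUES = {"she", "her", "female", "woman", "girl"}
MASCULINE_CUES = {"he", "him", "his", "male", "man", "boy"}

def gender_cues(profile):
    """Return which explicit gender cues appear in a profile's text fields."""
    text = " ".join(profile.get(k, "") for k in ("name", "description", "avatar_alt"))
    words = set(text.lower().split())
    return {
        "feminine": sorted(words & FEMININE_CUES),
        "masculine": sorted(words & MASCULINE_CUES),
    }
```

Such a check only catches explicit cues; implicit gendering through a name like "Amy" or a stereotypically deferential tone, which the IISM researchers also flag, would need a separate review.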

The prominence of “female” chatbots encourages stereotypes of women as submissive and compliant. UCLA professor Safiya Noble said in 2018 that chatbots can “function as powerful socialization tools, and teach people, in particular children, about the role of women, girls, and people who are gendered female to respond on demand.”

Raj Sanghvi, CEO at BitCot, a prominent digital enablement and development firm, believes it is important for developers to create more gender-neutral virtual assistants that do not reinforce gender bias via new technologies.

“A well designed compassionate Chatbot can stimulate a great conversation and can help solve and streamline several communication challenges,” says Sanghvi. 

“It is really important to understand what problem you are solving and define parameters, questions, choice of language, context. Chatbots can potentially take the customer experience to the next level.” 

When asked what was the most interesting aspect of working on chatbots, Sanghvi answered, “One cannot hide behind a screen or the user interface when working on a chatbot, which is good and bad. Therefore, the user experience is all about context, language and intelligence of the bot to help solve a problem and the evolution of the bot is a constant process as the bot learns more about the customer.”

Another expert we spoke to for this piece, Eric Kades, founder of TextChat, had another perspective on the interesting aspects of chatbots:

“What I find most interesting about chatbots is understanding how the design of each individual conversational interface has the ability to positively or negatively impact someone’s life. The journey of finding that right formula for customer success is always challenging, fun, and rewarding for me.”

A pioneer of AI-powered solutions, TextChat, in conjunction with UCLA, built its first custom chatbot in 2017, which helped students from low socioeconomic backgrounds navigate the financial aid process and significantly decreased the dropout rate.

Another expert chosen to consult for this piece is Molly Duggan, Chief Creative Officer and CEO of Molly Duggan Associates, an award-winning creative technology agency based in San Francisco. Duggan’s projects include some of the most significant science, technology, aerospace, environmental, higher education, health care, and entertainment marketing programs around the world.

Duggan brings a needed perspective to the conversation, as her agency has created chatbots for clients that exemplify the ideal manner by which to approach the issue of gender in software development. 

“About eight years ago, we started looking into using AI to analyze tone and sentiment in social media,” says Duggan. “It’s important because this can reveal how people feel about your brand through the online lens on a massive scale.”

In 2018, Duggan started working with AI-powered virtual assistants and created a service offering called Conversation Orchestrator. The following year, after becoming a Drift solutions partner, the agency received an award from Drift for its work in the conversational marketing space. 

The single most exciting thing about the virtual assistant space, in Duggan’s opinion, is mapping human behavior to scripted conversations that deliver results for customers. 

“You would think people would rather talk to a human,” said Duggan. Instead, Duggan found that when salespeople tried to prompt a conversation with, “Hi! I’m Sam. Let me know if you have any questions or you can call me at…”, the conversations never happened. 

However, suppose a brand prompts a visitor with a question about a particular pain point, gleaned from the context of the page they’re on. In that case, the customer will typically respond as if it’s a survey, or will confirm, “Yes, I do have that problem.” 

“Once that happens,” says Duggan, “the conversation script starts, and if we do our job right, we can guide the customer to what they need and educate them along their journey. Our goal is to help them get what they need while creating a delightful experience.” 

But if not done correctly, this can go poorly. “If a chatbot using Natural Language Processing (NLP) and machine learning technology keeps answering a simple question wrong over and over again,” says Kades, “the chatbot is leaving the customer in what I call ‘wrong answer hell.’”

TextChat is the only chatbot platform on the market that allows business owners to take over a conversation from a chatbot in real time via text message, which has been shown to greatly increase sales conversion.

Kades continues, “It saddens me to think that chatbots get the bad name that they do, because technology companies sell their chatbot platforms to eliminate human interaction and save money. This causes chatbot bias, slows down chatbot adoption, and causes humans to waste a lot of time with chatbots that are built to be ‘roads to nowhere.’”  

Sanghvi believes it is the compassion of language (or the lack thereof) in chatbots that determines their likelihood of success. 

“Having diversity in computer science and engineering, as well as among the data analysts that help create data, along with context and compassion in the language, would help reduce bias, but this will be a work-in-progress topic for a long time.”

Sanghvi, a well-known supporter of women entrepreneurs and their innovations, says that discussions of gender are vital to creating socially beneficial AI. 

“Despite being less than a decade old, modern chat assistants require particular scrutiny due to widespread consumer adoption and a societal tendency to anthropomorphize these objects by assigning gender,” says Sanghvi. 

“Applying a gender to a bot that reinforces stereotypes is problematic, so we don’t take that approach,” says Duggan. 

“We script bots that are non-binary, and our scripted conversations don’t take on a particular gender. We name our bots with the partial names of the brand (i.e., Talkbot for Talkdesk or Clearbot for ClearSale).” 

On the other hand, as the CEO of a Creative Technology Agency, Duggan understands how important it is for the technologies we interact with, such as Intelligent Virtual Assistants, to be based on industry-wide guidelines for the humanization of AI and how gender is portrayed.

“In my opinion, applying gender to an Intelligent Virtual Assistant should be done consciously. There may be circumstances where gender is essential for an Intelligent Virtual Assistant,” says Duggan, “such as dealing with physical or mental health topics. In those circumstances, a person may feel more comfortable having a conversation with someone of similar gender identity as their own.” 

Molly Duggan Associates partners with organizations that are taking on some of the world’s most challenging problems—working with visionaries at organizations like Johns Hopkins, Berkeley Lab, UCLA, and UCSF.

Higher education is an environment where awareness of inclusion is crucial, and these higher ed clients are proactive about addressing gender bias. 

Duggan elaborates: “As potentially the first brand touchpoint for a visitor, it can set the tone of the experience. A conscious approach to all bias (economic, language, education, race) should be considered.”

So what can be done to address gender bias in chatbots? A good first step is more thoughtful chatbot design. “With thoughtful chatbot design, a chatbot will drive sales and customer satisfaction dramatically for an ecommerce business,” says Kades. 

He continues, “There should be more focus on designing chatbots where the goal is customer success, not just solving a business problem or cutting costs. I think software designers often forget that chatbots are communicating with real human beings who have real anxieties about making a purchase decision or need to solve their own personal problem.”

Duggan adds to this conversation, stating, “We see our job as focusing on the pain points of each visitor and providing conversational solutions that are both empathetic and playful without a slant towards any gender.” 

And of course, a large part of the problem and therefore the solution is data. 

All bots are first trained on real data. Using a combination of ML models and purpose-built tools, developers match the questions customers ask with the most suitable answers. 

“Bots use pattern matching to classify the text and produce a suitable response for the customers. A standard structure of these patterns is ‘Artificial Intelligence Markup Language’ or AIML,” says Sanghvi. 

Once the chatbot is live and interacting with customers, smart feedback loops can be implemented. During a conversation, when a customer asks a question, the chatbot can offer a few candidate answers by presenting options such as “Did you mean a, b, or c?”

That way, customers themselves match their questions with the actual possible intents, and that information can be used to retrain the machine learning model, improving the chatbot’s accuracy.
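The pattern-matching and feedback-loop ideas described above can be sketched in a few lines of Python. This is purely illustrative: the `IntentMatcher` class, its methods, and the example data are invented for this sketch and are not part of any real chatbot platform.

```python
# Toy sketch of intent matching with a "smart feedback loop":
# rank intents by word overlap, then fold the customer's confirmed
# choice back into the training data so accuracy improves over time.
from collections import Counter


def tokenize(text):
    return set(text.lower().split())


class IntentMatcher:
    def __init__(self):
        # intent -> list of tokenized example questions (the training data)
        self.examples = {}

    def train(self, intent, question):
        self.examples.setdefault(intent, []).append(tokenize(question))

    def candidates(self, question, top_n=3):
        """Rank intents by token overlap: the 'Did you mean a, b, or c?' options."""
        q = tokenize(question)
        scores = Counter()
        for intent, examples in self.examples.items():
            scores[intent] = max(len(q & ex) for ex in examples)
        return [intent for intent, s in scores.most_common(top_n) if s > 0]

    def confirm(self, question, chosen_intent):
        """When the customer picks an option, retrain on that confirmation."""
        self.train(chosen_intent, question)


bot = IntentMatcher()
bot.train("billing", "where is my invoice")
bot.train("shipping", "when will my order arrive")

options = bot.candidates("my order has not arrived")
# the customer confirms the intent, which retrains the matcher
bot.confirm("my order has not arrived", "shipping")
```

Each confirmed answer becomes a new labeled example, which is exactly the retraining loop Sanghvi describes.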

To reduce bias in this technology it is essential to pay close attention to the data on both sides of the bot. “People write bots and also people use them so bias originates from both those aspects,” says Sanghvi.

Sanghvi importantly noted that “the NLP layer and knowledge base are the source content, and if that is biased, generally the chatbot will continue to show the behavior. There are ways to reduce bias by allowing a review system and incorporating it into the flow, as well as by improving the quality of the source data.”

The key is to involve data scientists in the process in order to mitigate risks, and also to invest in research and multidisciplinary development teams that include roles such as psychologists and linguists. 

To address gender portrayals in AI bots, developers must focus on diversifying their engineering teams; schools and governments must remove barriers to STEM education for underrepresented groups; industry-wide standards for gender in AI bots must be developed; and tech companies must increase transparency. Molly Duggan Associates is a great example of this. 

“We strive to create an empathetic automated experience that delights visitors who need a question answered or need to know that there’s a solution that fits them,” says Duggan. “Our overarching goal is to make higher ed accessible. And, to do that, we have to meet students where they are.”

She concludes, “Reducing bias in chatbots is another step toward creating a safe and inclusive educational environment for all. The bottom line, brilliance knows no gender.”

This blog is a continuation of the Building AI Leadership Brain Trust Blog Series, which targets board directors and CEOs to accelerate their duty of care to develop stronger skills and competencies in AI in order to ensure their AI programs achieve sustaining results.

My last two blogs focused on the importance of AI professionals having some foundation in a science discipline as a cornerstone for designing and developing AI models and production processes, and explored the value of computing science, the richness of the complexity sciences, and the value of physics, to appreciate the importance of integrating diverse disciplines into complex AI programs – key for successful returns on investment (ROI).

This blog discusses key AI and machine learning (ML) terms that every board director and CEO must know to stay relevant and advance their duty of care. If you want a good starter on the responsibility and duty of care, I recommend you read my earlier blog here.

In the Brain Trust Series, I have identified over 50 skills required to help evolve talent in organizations committed to advancing AI literacy. The last few blogs have been discussing the technical skills relevancy. To see the full AI Brain Trust Framework introduced in the first blog, reference here.

We are currently focused on the technical skills in the AI Brain Trust Framework advancing the key AI and machine learning terms.

Technical Skills:

1. Research Methods Literacy

2. Agile Methods Literacy

3. User-Centered Design Literacy

4. Data Analytics Literacy

5. Digital Literacy (Cloud, SaaS, Computers, etc.)

6. Mathematics Literacy

7. Statistics Literacy

8. Sciences (Computing Science, Complexity Science, Physics) Literacy

9. Artificial Intelligence (AI) and Machine Learning (ML) Literacy

10. Sustainability Literacy

Understanding Key AI Terms

AI is a deep and rich discipline that spans many fields, including statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability, and economics.

Hence I am only going to dip into three key basic concepts to answer: what is AI? What is an algorithm? And what is an AI model? I will continue in the next two blogs to define other key AI concepts and definitions that I believe every CEO or board director must master at the basic AI proficiency levels. After all, how can you lead if you don’t know the basics of one of the most significant disruptors of our lifetime?

I always say to executives: it is never too late to learn, and to stay relevant you have a business imperative to be sharper about digital transformation. AI is a cornerstone not only for countries competing with one another, but for corporations rethinking their business models.

The first order of business is to ensure that you can define what artificial intelligence is. In the most simplistic terms, AI is the computer simulation of human intelligence in machines that are programmed to think like humans and mimic human actions. A typical AI analyzes its environment and takes actions that maximize its chance of success.

The term AI was first defined by John McCarthy in 1956, when he held the first academic conference on the subject. Six years earlier, in 1950, Alan Turing had written a paper on the notion of machines being able to simulate human beings and to do intelligent things.

AI is not new – it’s just that the time is right now for AI everywhere, due to the proliferation of volumes of data, both structured and unstructured, and more importantly the ability of computing processing power to crunch the data and produce insights that were near impossible to generate before.

For more on AI history, refer to Gil Press, a senior Forbes contributor, who has written an excellent summary of AI; if you are a history buff, I recommend you read his blog here. You will find many definitions of AI, but to distill AI to its basic roots, I recommend you read the additional, more detailed definition here.

The second key AI concept to understand is: what is an algorithm?

An algorithm is a process or a set of rules to be followed in calculations or other problem-solving operations, especially by a computer. Wikipedia defines an algorithm as “a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning.” Whether you are aware of it or not, algorithms are increasingly ubiquitous – everywhere in our lives.

The goal of an algorithm is to solve a specific challenge or problem, which is usually defined as a sequence of rules or steps. An algorithm tells a computer what to do next with “and,” “or,” and “not” statements.

Algorithms provide the instructions for AI systems and without a set of algorithms AI cannot perform a function (outcome).
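As a toy illustration of the idea above: an algorithm is just a fixed sequence of rules, branching on “and,” “or,” and “not” conditions. The loan scenario and thresholds below are invented for this sketch, not drawn from any real system.

```python
# A toy rule-based algorithm: a fixed sequence of steps the computer
# follows, branching on "and", "or", and "not" conditions.
def loan_decision(income, has_debt, years_employed):
    if income > 50_000 and not has_debt:
        return "approve"
    if income > 50_000 or years_employed >= 5:
        return "review"
    return "decline"
```

The same inputs always yield the same output; the leap to machine learning comes when such rules are learned from data rather than hand-coded.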

In terms of AI, we use the term machine learning to describe an algorithm, or series of algorithms, that enables software to update itself and learn automatically from data, without the need for a programmer. ML algorithms are fed a data set to perform a specific task and solve a problem without being explicitly programmed. There are literally hundreds of AI algorithms; this blog defines a number of the most popular types of clustering algorithms and is worth a read.
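Since clustering algorithms are mentioned above, here is a hedged toy sketch of one of the most popular, k-means, reduced to one dimension. The data and function name are made up for illustration only.

```python
# Toy 1-D k-means: repeatedly assign each point to its nearest center,
# then move each center to the mean of the points assigned to it.
def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        groups = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            groups[nearest].append(p)
        # recompute each center; keep it in place if its group is empty
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return sorted(centers)


# two obvious clusters: values near 1.0 and values near 9.5
centers = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 10.0], [0.0, 5.0])
```

Note that no one programmed where the clusters are; the algorithm finds the pattern from the data, which is the essence of the ML point above.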

AI requires lots of data so it can find patterns and then build predictions based on the data being analyzed.

The third key AI concept to ensure you understand is: what is an AI model? AI/ML models are the mathematical algorithms that are “trained” using data and human expert input to replicate a decision an expert would make when provided the same information. Artificial intelligence is generally divided into two types – narrow (or weak) AI and general AI, also known as AGI or strong AI.
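A minimal, hedged illustration of “training” in the sense just described: given a handful of expert-labeled measurements (the data below is made up), pick the decision threshold that best reproduces the expert’s past judgments, then use it to make new decisions.

```python
# Sketch of training a one-parameter model from expert-labeled data:
# choose the threshold that matches the expert's decisions most often.
def train_threshold(samples):
    """samples: list of (measurement, expert_said_yes) pairs."""
    best_t, best_correct = 0.0, -1
    for t, _ in samples:
        correct = sum((m >= t) == label for m, label in samples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t


data = [(0.2, False), (0.4, False), (0.6, True), (0.9, True)]
threshold = train_threshold(data)


def model(measurement):
    # the "trained model" replicating the expert's decision rule
    return measurement >= threshold
```

Real AI/ML models have millions of parameters instead of one threshold, but the principle is the same: parameters are fitted to data so the model reproduces expert decisions on new inputs.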

Forbes contributor Tom Taulli wrote an excellent post on how to build an AI model, which offers practical, step-by-step perspectives to give more depth to this point. See his writings here.

See this blog reference for additional information on basic AI terms, and some simple learning visualizations to make your AI learning easier and more fun.

Questions board directors and CEOs can ask to evaluate their depth of talent in artificial intelligence:

1.) How many resources do you have with an undergraduate degree, a master’s, or a Ph.D. in artificial intelligence?

2.) How many projects underway in your company are using internal AI resources vs external resources?

3.) Is the balance of your resourcing aligned to your strategic vision of modernizing your talent base?

4.) How many of the Board Directors or C-Suite have expertise in AI or Machine Learning disciplines?


I believe that board directors and CEOs need to appreciate AI fundamentals, and also ensure that they understand their talent depth in AI and machine learning disciplines, but as discussed through this series, there are many other skills and competencies required to thrive in using AI efficiently and effectively. Stay tuned for more helpful AI concepts simplified to increase your AI knowledge and vocabulary.

More Information:

To see the full AI Brain Trust Framework introduced in the first blog, reference here. 

To learn more about Artificial Intelligence, and the challenges, both positive and negative, refer to my new book, The AI Dilemma, to guide leaders forward.


If you have any ideas, please do advise as I welcome your thoughts and perspectives.


Baidu, the leading search engine in China, has launched a robotaxi service in Shougang Park – an industrial park on the outskirts of Beijing that will be a site for the 2022 Winter Olympics. Baidu claims the service is “fully driverless” and will be the first service in China where members of the public can hail a paid robotaxi of this sort.

The Baidu Apollo project is one of the most established such efforts in China. It recently reported over 10m kilometers of on-road testing in China, the first Asian company to reach that level. As we count the milestones to true deployment, without hard safety data, the best thing we can look at is how close a company is to the final goal. That final goal must check several boxes:

  1. It operates over a wide and financially viable service area
  2. The service is generally available to the public, or a specific commercially viable subset of the public
  3. It’s a real commercial operation, charging a competitive but profitable (at least on a COGS+ basis) fare
  4. There is no safety driver anywhere in the vehicle or any chase car
  5. Any remote operations center is just for occasional strategic advice, and there are significantly fewer (one quarter or less) remote operators than in-service vehicles
  6. Bonus points for a particularly difficult-to-drive service area

Nobody is quite there. Waymo is closest, though their service area is currently limited and they have not disclosed their ratio of remote ops staff to vehicles. In addition, only a portion of vehicles meet criterion #4. Starship (disclaimer: I am a stockholder) ticks all the boxes but does small deliveries, not passenger service.

AutoX was the second company to announce progress here, with a service in a quiet suburb of Shenzhen. A test ride of their service by a member of the press impressed me, showing the territory is considerably more complex than Waymo’s service area in Chandler, Arizona, though not as complex as more urban parts of the USA or China.


One company in Russia has offered service with a safety driver in the passenger seat. This safety driver can still hit a kill switch and grab the wheel, so it’s an odd choice — the capability of the safety driver to do their job is impaired, so it’s mostly a show of confidence, a partway step to removing the driver. Others have also done this, or done non-public tests with no safety driver at all. WeRide recently demonstrated test operations in Guangzhou with no safety driver, so things are moving fast in China.

Baidu service

The Baidu service operates over a decent but small service area. It costs 30 RMB, but many rides can be made free by destinations. (It is unconfirmed if anybody is paying on day one.) The biggest missing point, however, is the presence of a safety employee in the passenger seat. Baidu states this employee is not there to do any driving, and is simply there to answer questions and help the passenger feel more comfortable. However, they can of course grab the wheel in an emergency, and may have a kill switch they can trigger.

In addition, Baidu has a remote operations center. All teams have those, but unlike most, operators in that center can actually remotely drive the vehicles, since Baidu can count on low-latency 5G networks in this region of Beijing. Baidu states remote operators are not constantly monitoring vehicles on a 1:1 ratio but has not yet responded to queries about the ratio of remote operators to cars. Most remote operations centers, with the goal of having a large ratio between cars and humans, have the remote operators only provide strategic advice the car can use to plan its way around something which was too confusing for the software. Cars typically stop and request a remote assist, which does not drive. Having remote drive ability can resolve problems faster, which is good, but depends on very good networking. It is not a good idea to depend on it to solve problems in real time while moving, unless you have one operator per car, which defeats the point.

While Baidu is to be congratulated for having the confidence to take this step, this service does not come close to counting as “fully driverless.” It may be able to get there soon, however. It’s not as bad as Tesla calling its product “full self driving,” but there is some bad hype inflation going on with certain industry players, who would probably be better served by underhyping rather than overhyping. Still, this project will teach them much as they deal with real customers. To paraphrase the military maxim, “no business plan survives first contact with the customer.”

AutoX’s service is closer, but it’s not clear how open it is to the general public. Some reports suggest members of the public must be approved, making it close to a wide beta. This is a fairly reasonable restriction, if for no other reason than that most services have limited resources. In the future, robotaxi services which sell subscriptions will probably only sell to people who can make good but not excessive use of the service, though they will still probably take anybody on a pay-per-ride basis.

Rumors abound that Waymo will open in San Francisco or Silicon Valley soon. This would finish ticking their boxes. Waymo fares in Arizona are slightly less than Uber. Obviously nobody makes any profit during these pilot phases, but generally it should be possible to make a profit with prices close to Uber but no driver to pay as long as other costs stay in line.

Two AI luminaries, Fei-Fei Li and Andrew Ng, got together today on YouTube to discuss the state of AI in healthcare. Covid-19 has made healthcare a top priority for governments, businesses, and investors around the world and accelerated efforts to apply artificial intelligence to improving our health, from drug discovery to more efficient hospital operations to better diagnostics.

The first quarter of 2021 saw a new funding record with nearly $2.5 billion raised by startups focusing on AI in healthcare, according to CB Insights. But this could be similar to the excitement around, and investment in, autonomous vehicle technologies a few years ago, as successful implementations of AI-based products and services in healthcare may not be just around the corner.

While both Li and Ng are currently applying AI to healthcare challenges, they believe that in the next few years they and their colleagues will still be in the experimentation stage. Progress will be “much slower than we wish over the next few years,” says Ng. “We are still figuring out the path to a human win,” agrees Li. For her, taking a “human-centered approach” is key to advancing the state of the art of AI in healthcare. She encourages her students to shadow clinicians in the hospital, “to see the human side,” to understand better both patients and the people taking care of them as the key to successful adoption of AI-based solutions. This is a unique challenge in the healthcare sector, according to Li, stressing the importance of the non-digitized aspect of healthcare, the human factor. “We have almost zero data on human behavior,” she says.

In addition, Ng advocates shifting AI development from being model-centric to being data-centric. This includes improving the quality of the data used to train AI programs and building the tools and processes required to put data at the center of developers’ work. The quality, privacy, and availability of data has, of course, its unique challenges in healthcare settings. Ng points out that quality-of-data standards are still ambiguous and, as a result, AI developers need to brainstorm all the things that can go wrong and analyze the data accordingly. Li thinks that the most important thing is to recognize human responsibility. “AI is biased” is a term that puts the responsibility on the machine rather than the people who collect and manage the data. For Li, putting guard rails against potential bias and ensuring data integrity is a first step in the design process.

In answering the question “What are the healthcare problems that are yet to be solved?” Ng mentions mental health, diagnostics, and the operational side of healthcare. Li cites the 250,000 people that die in the U.S. annually due to medical error. AI can help in ensuring that medical procedures are carried out correctly, and that chronic patients are cared for, at home or in the clinic, in a timely fashion. “This is what ambient intelligence is about,” says Li, to serve as physician- and nurse-assistants, to catch errors before they occur.

The observations made by Ng and Li are supported by recent surveys and studies, all pointing to the nascent state of AI in healthcare:

·       90% of U.S. hospitals have an AI/automation strategy in place, up from 53% in the third quarter of 2019. But only 7% of hospitals’ AI strategies are fully operational, according to Sage Growth Partners;

·       The number of approved AI/ML-based medical devices has increased substantially since 2015, but currently, “there is no specific regulatory pathway for AI/ML-based medical devices in the USA or Europe,” concluded a study published in The Lancet;

·       Despite $27 billion in federally funded incentive programs to encourage hospitals and providers to adopt Electronic Health Records, there is no standard format or centralized repository of patient medical data. “The Covid-19 pandemic has underscored this issue,” observes a CB Insights report;

·       Physicians were susceptible to incorrect advice, whether the source was an AI system or other humans. “For high-risk settings like diagnostic decision making, such over-reliance on advice can be dangerous,” concludes an MIT study.

But, as Fei-Fei Li says, a barrier to adoption is also an opportunity. Both Li and Andrew Ng expect a tipping point in the future, when a big success story will be rapidly replicated and will encourage healthcare providers—and patients—to embrace healthcare AI.

Before the sun comes up over the rows of salad greens and cauliflower and other vegetables that blanket California’s farms, operators who have been trained to manage the hulking orange machines known as Titan FT-35s load them up from the Salinas hub of FarmWise—a startup that offers robotic weeding as a service—and transport them via tractor-trailer to farms in California and Arizona. 

Rented at a per-acre cost, the geo-fenced robots drive along planted rows, capturing images of crops that get uploaded and run through a model trained to classify each image. Weeds get the chop. Vegetables remain. The more time a robot spends at a given farm, the better the AI gets, and so does the weeding. To date, FarmWise’s robots have imaged about 200 million individual crops, and the company has partnered with about a dozen of the largest vegetable farms in the U.S. 
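The classify-then-act loop described above can be sketched as follows. This is purely illustrative: the function names and the stand-in classifier are invented for this sketch and are not FarmWise’s actual pipeline or API.

```python
# Illustrative sketch only: decide which imaged plants to weed based on
# a classifier's crop-vs-weed label. Not FarmWise's real system.
def weed_pass(images, classify):
    """classify(image) -> 'crop' or 'weed'; return indices of plants to chop."""
    return [i for i, img in enumerate(images) if classify(img) == "weed"]


# Stand-in classifier: in production this would be a trained vision model.
labels = {"img_a": "crop", "img_b": "weed", "img_c": "crop"}
to_chop = weed_pass(["img_a", "img_b", "img_c"], labels.get)
```

Each pass over a field adds newly labeled images to the training pool, which is why the weeding reportedly improves the longer a robot works a given farm.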

Cofounders Sébastien Boyer and Thomas Palomares launched FarmWise in 2016 to tackle two major pain points for the farming industry: the increasingly unpopular use of pesticides, and a persistent labor shortage. Robots, they believe, can solve these problems, especially for high-value crops–which tend to be labor intensive and expensive to produce—like the leafy greens for which the Salinas Valley (nicknamed “America’s Salad Bowl”) is known. Rather than using chemicals or changing to crops, like tree nuts, that are easier to grow, farms can hire FarmWise; when the robots are dispatched, they tend to eradicate 95% of weeds, Boyer says, allowing farms to grow the crops they want to grow, and to do so sustainably. No chemicals, no problem.

A returning company on the Forbes AI 50 list for 2021, FarmWise raised a $5.7 million seed round in 2017; two years later, it followed that up with a $14.5 million Series A led by Pasadena-based Palisades Ventures. The robots are now on their third generation: the more Boyer and Palomares learn by visiting farms and talking to farmers, the more they are able to refine and advance the technology that underlies their platform.

“We weren’t coming to them with anything to sell,” Boyer says of the farms with whom FarmWise has developed partnerships. “We were really coming to them with more questions than answers.” Their focus on weeding is the result of these interviews. 

“If you look at farming as a whole, three quarters of all the chemicals used are used to kill weeds,” Boyer tells Forbes. In the absence of herbicides—which are never used for organic crops—weeding is done by hand, which is a problem, because the U.S. labor force does not have the hands that producers need. In 2018, the USDA reported that American farms have found it increasingly difficult to hire and maintain workers; in a survey of 1,000 Californian farmers that came out the following year, 42% of respondents said they adapted to labor scarcity by reducing pruning or weeding, and 27% said they delayed those same processes. 

FarmWise’s cofounders, both born in France, met as classmates at the prestigious Ecole Polytechnique, in Palaiseau. Palomares had grown up helping his grandparents at their farm in a small town in the Alps. “They were making yogurt and cheese, like a lot of farms in France,” he says. From a young age, he intuited the physical demands that farm work placed on the human body, and the emotional effects wrought when that labor brought insufficient fruits. After graduation, Palomares went to Stanford, and Boyer to MIT, but they kept in touch and eventually decided to launch a company that would draw on their shared interest in technology, sustainability and agriculture. 

Recently, FarmWise has released a beta version of a new grower dashboard to a few customers, which will allow farmers better and more precise insights—like crop count, crop size, and spacing trends, for instance—into crop conditions. They are also hoping to expand the crop varieties on which the robots can work, which they believe the deep learning their technology employs makes them uniquely positioned to do. 

“Competitors who develop automated weeders often rely on simple infrared technologies and human-defined parameters which fail at successfully handling the variety of use cases we have to deal with in the field,” Boyer explains. 

For Alain Pincot, a managing partner at Betteravia Farms, which sells its produce under the label Bonipak, getting involved with FarmWise was a no-brainer. “We are, I think, a very progressive organization,” he says. Like the FarmWise founders, Pincot is French—he came to California to study agribusiness at Santa Clara University—and says that, in terms of welcoming automation into the agricultural industry, the U.S. is a bit behind. 

“I think Europeans are ahead of us Americans,” Pincot says. “They have experienced much earlier than us constraints with labor, and cost of labor. As early as the mid 90s or early 2000s, they were already thinking about how they could reduce their cost. While in the U.S., we still had it—let’s face it—pretty good.” The USDA report affirms this latter part of Pincot’s analysis: in the latter half of the 20th century, the U.S. enjoyed an influx of inexpensive labor from Mexico, which has now, for a variety of reasons, declined. 

Boyer and Palomares believe that the technology they have already created will serve as the launchpad for future endeavors, including moving into vineyards, tree crops, and commodity crops like corn, soybeans, and wheat. The company plans to expand into crop protection and fertilizing, in addition to its weeding services.

As it does so, Boyer says, the startup will keep an eye on creating jobs, not just automating them. “The impact that we have on jobs is to create a new type of farming job,” one without grueling manual labor. These new jobs, Boyer believes, will be not only better paid, but also “more interesting.” 

“These are the jobs of tomorrow for the farming industry,” he says. “That will create a totally new type of workforce.”

The impact of the COVID-19-driven crisis is far more significant than that of any other crisis in recent history. Beyond the health impact, it is fundamentally changing the way human beings interact and conduct business. Businesses aligned with the new norm have seen an economic acceleration, while those that are not have slowed. Hence, COVID-19’s impact on the economy has been bimodal: some sectors, such as small and medium businesses, have taken a big hit, while others, like the technology sector, are benefiting from the adoption of cloud, automation, and productivity-enhancement tools.

Analyzing past inflection points such as recent economic recessions or depressions, we notice that many famous companies emerged during challenging times. Uber (2009), WhatsApp (2009), Airbnb (2007), MailChimp (2001), EA (1982), Burger King (1954), and Hewlett-Packard (1939) are some famous examples. Along the same lines, the coronavirus pandemic is acting as a rare historical inflection point, creating an unprecedented opportunity to foster entrepreneurship. Indeed, entrepreneurs are jumping on the opportunity to start new businesses during the pandemic. Using data from the Census Bureau, the Peterson Institute for International Economics released a report saying that startup business activity grew by 24 percent in the United States last year, from 3.5 million business startups in 2019 to 4.4 million in 2020. This contrasts with the declining rate of business formation that has prevailed in the United States for the last few decades. The boom in entrepreneurship is national, not confined to hotspots like Silicon Valley, and entrepreneurship here is defined more broadly than the technology startups that have mushroomed in Silicon Valley in the past decade. As in the United States, the number of new businesses has also increased in Chile, Turkey, and the United Kingdom. 

Let’s analyze the catalysts behind such a boom in entrepreneurship during the pandemic. One likely factor is the massive support from US federal and state governments for new business creation. Another is the availability of highly skilled people: for a business to flourish, you need capable people who also have patience, persistence, and perseverance. Amid the coronavirus crisis, the US unemployment rate reached an unprecedented 14.8% in April 2020, the highest since data collection started in 1948. This may be acting as a positive contributing factor for building a new business during the pandemic: as big companies lay off employees, or existing businesses are forced to shut down, seasoned top talent comes onto the market, up for grabs by new companies. A deeper aspect of the human capital available in such times is that a person who pursues a business idea in a hard economic environment is a true entrepreneur at heart. He or she is likely to be highly determined and to have a deeper understanding of entrepreneurial traits than someone who starts a business during an economic boom. So it is worth arguing that entrepreneurs who create startups during a pandemic are the best version of entrepreneurs. In fact, for some entrepreneurs, starting a new business may be the only way to survive, so they move mountains to make the business work.

In addition to human capital, another key ingredient for entrepreneurial success is access to the right market. The COVID-19 crisis has opened doors to new market opportunities and stimulated big changes in existing markets. While it has caused destruction on one side, it has also created or enlarged markets on the other. Such new markets are ready to adopt solutions, providing the much-needed product-market fit for what entrepreneurs create.

So what kind of company can we expect as the next big thing to emerge from this pandemic? Will it be another Amazon or Google, or should we brace ourselves for something entirely new, perhaps a new form of digital and Artificial Intelligence (AI)-based disruption? Time will deliver the final verdict. But since automation and digitization have been two of the top trends of the COVID-19 era, we can assume that the next big thing will leverage them as pillars. Its core enabling technologies are likely to be the cloud and AI. In fact, such a company will likely be AI-first from the start, embedding AI in virtually every aspect of its product or service. From fitness monitoring to self-driving cars and shopping at retail locations, the applications of an AI-first approach could be immense and unprecedented. So let’s fasten our seatbelts and watch the rise of a new AI giant that will hopefully uplift a generation.

This blog continues the Building AI Leadership Brain Trust Blog Series, which targets board directors and CEOs, helping them accelerate their duty of care to develop stronger skills and competencies in AI so that their AI programs achieve sustained results.

My last few blogs introduced the theme and value of science, stressed its importance to AI, and focused on why AI professionals need some foundation in computing science as a cornerstone for designing and developing AI models and production processes. The science blog series covers three disciplines relevant to AI: 1) computer science, 2) complexity science, and 3) physics.

This blog introduces the importance of the field of complexity science and its relationship to AI competencies.

In the Brain Trust Series, I have identified over 50 skills required to help evolve talent in organizations committed to advancing AI literacy. The last few blogs have been discussing the technical skills relevancy. To see the full AI Brain Trust Framework introduced in the first blog, reference here.

We are currently focused on the technical skills in the AI Brain Trust Framework

Technical Skills:

1. Research Methods Literacy

2. Agile Methods Literacy

3. User-Centered Design Literacy

4. Data Analytics Literacy

5. Digital Literacy (Cloud, SaaS, Computers, etc.)

6. Mathematics Literacy

7. Statistics Literacy

8. Sciences (Computing Science, Complexity Science, Physics) Literacy

9. Artificial Intelligence (AI) and Machine Learning (ML) Literacy

10. Sustainability Literacy

What is the relevance of complexity science to AI as a discipline?

You do not see enough mention of complexity science in relation to AI competency development. This really surprises me, because complexity science is all about traversing disciplinary boundaries, operating both within and between multiple systems. The complexity sciences have emerged through interdependent and overlapping influences from diverse fields, drawing concepts from physics, economics, biology, sociology, and computer science.

The complexity sciences strive to understand “system” phenomena characterized by change and unpredictability. A “system” is a set of connected or interdependent things or agents (such as a person, a molecule, a species, or an organization). Both systems theory and complexity science focus on the relationships between these elements rather than on each element alone within the system.

One of the important attributes of complexity science is that it produces emergence. Emergence is of particular importance to innovation, where one needs to be patient and appreciate iterative inquiry and persistence. In everything I have learned over the past ten years designing and developing AI models for diverse enterprise use cases, from forecasting outcomes on sales data sets, to predicting customer call-center churn rates, to reviewing AI models that predict which online-gaming customers have the propensity to become VIP customers, there is always one constant reality.

Good AI projects never end. They are rooted in the complexity sciences simply because so many variables are in play: complex data sets, the continual refreshing and augmenting of those data sets, the addition of new methods to increase predictive accuracy, and the constant work of ensuring business users apply the insights and actually advance new outcomes, rather than leaving AI models standing like soldiers in isolation from human decision making.

Emergence not only happens in innovation processes; it occurs in particular when random events combine to produce outcomes that have observable patterns yet are unpredictable and difficult to reproduce. Examples of such unpredictable events include COVID-19’s mutating variants, stock market valuations, viral videos, and AI deep-learning models.

Complexity science has been a transformational element in the physical and biological sciences since the 1970s. However, only in the last decade has the relevance of complexity science to business begun to be fully appreciated.

The applications to business and to business thinking are profound, and in many ways counterintuitive. As AI starts to revolutionize business processes and business thinking, knowledge of complexity science and its implications becomes all the more important. 

One example of complexity in AI is that a neural network is a representation of a complex, dynamic system. AI neural networks draw on diverse disciplines, from non-linear dynamics to solid-state physics, human brain physiology, and even parallel computing.

One development underway that brings AI and the complexity sciences together is the recognition that we live in a highly creative and unpredictable world, one in which the system we inhabit must adapt to unforeseeable changes that represent new possibilities or opportunities. Appreciating more holistically the dynamics and complexities of all the factors around AI enablement is a key area for evaluating AI maturity in businesses, resulting in more holistic approaches and better risk-management practices.

Hence, companies that value disciplines like complexity science and holistic systems-thinking operating frameworks will improve their appreciation of AI and also help our world adapt to a world with AI.

What key questions can Board Directors and CEOs ask to evaluate their depth of complexity skills linkages to artificial intelligence relevance?

1.) How many resources do you have with an undergraduate degree in the complexity sciences, versus a master’s degree or a doctoral degree?

2.) Of these total resources trained in complexity sciences disciplines, how many also have a specialization in Artificial Intelligence?

3.) How many of your most significant AI projects have expertise in complexity science and adaptive systems thinking to ensure holistic thinking in managing complex systems?

4.) How many of the Board Directors or C-Suite have expertise in complexity sciences or adaptive systems to support AI innovations and value emergence practices?

These starting questions can help guide leaders to understand their talent mix and to appreciate the value of complexity science disciplines in augmenting specializations in artificial intelligence or data science within enterprise advanced-analytics functions.


Board directors and CEOs need to understand their talent depth in the complexity sciences to ensure that their AI programs are positioned for success. Ensuring that AI talent spans diverse disciplines is key to ensuring that AI investments succeed, and that continued investments are made to help them evolve and deliver their value: supporting humans in augmenting their decision making or improving their operating processes.

The last blog in the three-blog science literacy series will further extend the AI Brain Trust Framework and explore some of the foundations of physics relevant to artificial intelligence.

More Information:

To see the full AI Brain Trust Framework introduced in the first blog, reference here. 

To learn more about Artificial Intelligence, and its challenges, both positive and negative, refer to The AI Dilemma, written to guide leaders forward.


If you have any ideas, please do advise as I welcome your thoughts and perspectives.

Our lives are filled with explanations.

You go to see your primary physician due to a rather sore shoulder. The doctor tells you to rest your arm and avoid any heavy lifting. In addition, a prescription is given. You immediately wonder why you would need to take medication and also are undoubtedly interested in knowing what the medical diagnosis and overall prognosis are.

So, you ask for an explanation.

In a sense, you have just opened a bit of Pandora’s box, at least with regard to the nature of the explanation that you might get. For example, the medical doctor could rattle off a lengthy and jargon-filled description of shoulder anatomy and dive deeply into the chemical properties of the medication that has been prescribed. That’s probably not the explanation you were seeking.

It used to be that physicians did not expect patients to ask for explanations. Whatever was said by the doctor was considered somewhat sacrosanct. The very nerve of asking for an explanation was tantamount to questioning the veracity of a revered medical opinion. Some doctors would gruffly tell you to simply do as they have instructed (no questions permitted) or might utter something rather insipid like your shoulder needs help and this is the best course of action. Period, end of story.

Nowadays, medical doctors are aware of the need for viable explanations. There is specialized “bedside” training that takes place in medical schools. Hospitals have their own in-house courses. Upcoming medical doctors are graded on how they interact with patients. And so on.

Though that certainly has opened the door toward improved interaction with patients, it does not necessarily completely solve the explanations issue.

Knowing how to best provide an explanation is both art and science. You need to consider that there is the explainer that will be providing the explanation, and there is a person that will be the recipient of the explanation.

Explanations come in all shapes and sizes.

A person seeking an explanation might have in mind that they want a fully elaborated explanation, containing all available bells and whistles. The person giving the explanation might in their mind be thinking that the appropriate explanation is short and sweet. There you have it, an explanation mismatch brewing right before our eyes.

The explainer might do a crisp explanation and be happily satisfied with their explanation. Meanwhile, the person receiving the explanation is entirely dissatisfied. At this point, the person that received the explanation could potentially grit their teeth and just figure that this is all they are going to get. They might silently walk away and be darned upset, opting to not try and fight city hall, as it were, and merely accede to the minimal explanation proffered.

Perhaps the person receiving the explanation decides they would like to get a more elaborated version. They might stand their ground and ask for a more in-depth explanation. Now we need to consider what the explainer is going to do. The explainer might believe that the explanation was more than sufficient, and see no need to provide any additional articulation.

The explainer might be confused about why the initial explanation was not acceptable. Maybe the person receiving the explanation wasn’t listening or had failed to grasp the meaning of the words spoken. At this juncture, the explainer might therefore decide to repeat the same explanation that was just given and do so to ensure that the person receiving the original explanation really understood what was said.

You can likely anticipate that this is about to spiral out of control.

The person that is receiving this “elaborated” explanation is bound to ascertain that it is merely the same explanation, repeated nearly verbatim. That’s insulting! The person receiving the explanation now believes they are being belittled by the explainer. Either this person will hold their tongue and give up trying to get an explanation, or they will start hurling insults about how absurd the explanation was.

It can devolve into a messy affair, that’s for sure.

The overarching point is that there is a delicate dance between the explainer and the providing of an explanation, along with the receiver and the desired nature of an explanation.

We usually take for granted these aspects. In other words, you rarely see an explainer ask what kind of explanation someone wants to have. Instead, the explainer launches into whatever semblance of an explanation that they assume the person would find useful. Rushing into providing an explanation can have its benefits, though it can also start an unsightly verbal avalanche that is going to take down both the explainer and the person receiving the explanation.

Some suggest that the explainer ought to start by inquiring about the type of explanation the other person is seeking. This might include asking what kind of background the other person has, such as, in the case of a medical diagnosis, whether they are familiar with medical terminology and the field of medicine. There might also be a gentle inquiry as to whether the explanation should be delivered in one fell swoop or divided into bite-sized pieces. Etc.

The difficulty with that kind of pre-game ritual is that sometimes the receiver doesn’t want to run that gauntlet. They just want an explanation (or so they say). Trying to do a preamble is likely to irritate the receiver, who will feel as though the explanation is being purposely delayed. This could even smack of hiding the facts or some other nefarious basis for delaying the explanation.

All told, we usually expect to get an explanation when we ask for one, and not have to go through a vast checklist beforehand.

Another twist to all of this entails the interactive dialogue that can occur during explanations.

Explanations are not necessarily delivered in one breath from start to end. More likely, during the explanation, the receiver will interrupt, ask for clarification, or raise questions that arise. This is certainly sensible: if the explanation is going awry, why let it go on and on when the receiver can instead tailor or reshape its direction and style?

For example, suppose that a medical professional goes to see a medical doctor about a sore shoulder, and the doctor making the diagnosis does not realize that the patient is a fellow medical specialist. In that case, the explanation offered is likely to be aimed at a presumed non-medical knowledge base and proceed in potentially simplistic ways (with respect to the medical advice). The person receiving the explanation would undoubtedly interrupt to clarify that they know about medicine, and the explanation should rightfully be readjusted accordingly.

You might be tempted to believe that explanations can be rated as being either good or bad. Though you could take such a perspective, the general notion is that their beauty is in the eye of the beholder. One person’s favored explanation might be a disastrous or terrible one for someone else. That being said, there is still a modicum of a basis for assessing explanations and comparing them to each other.

We can add a twist on that twist.

Suppose you receive an explanation and believe it to be a good one.

Later on, you learn something else regarding the matter and realize that the explanation was perhaps incomplete. Worse still, it could be that the explanation was intentionally warped to give you a false impression of a given situation. In short, an explanation can be used to purposely create falsehoods.

That’s why getting an explanation is replete with problems. We oftentimes assume that if we ask for an explanation, and if it seems plausible, this attests that the matter is well-settled and above board. The thing is, an explanation can be distorted, either by design or by happenstance and lead us into a false sense of veracity or truthfulness at hand.

Another angle to explanations deals with asking for an explanation versus being given one that has not been requested. An explainer might give you an explanation outright because they assume you want one, whereas maybe you neither want nor care for an explanation and are satisfied to just continue on. At that point, if you interrupt the explanation, the explainer might be taken aback.

More twisting turns can arise.

Why all this talk about explanations?

Because of AI.

The increasing use of Artificial Intelligence (AI) in everyday computer systems is taking us down a path whereby the computer makes choices and we the humans have to live with those decisions. If you apply for a home loan, and an AI-based algorithm turns you down, the odds are that all you’ll know is that you did not get the loan. You won’t have any idea about why you were denied the loan.

Presumably, had you consulted with a human that was doing the loan granting, you might have been able to ask them to explain why you got turned down.

Note that this is not always the case, and it could be that the human would not be willing or able to explain the matter. The loan granting person might shrug their shoulders and say they have no idea why you were turned down, or they might tell you that company policy precludes them from giving you an explanation.

Ergo, I am not suggesting that just because a human is in the loop you will necessarily get an explanation. Plus, as repeatedly emphasized earlier, the explanation might be rather feeble and altogether useless. For example, the loan granting person might tell you that you were denied because you were turned down for the loan.

Say what?

Yes, that is a non-explanation explanation under the wink-wink fakery cloak of being an explanation.

In any case, there is a big hullabaloo these days that AI systems ought to be programmed to provide explanations for whatever they are undertaking.

This is known as Explainable AI (XAI).

XAI is growing quickly as an area of keen interest. People using AI systems are likely to expect, and indeed demand, that they be provided an explanation. Since the number of AI systems is rapidly growing, there is going to be a huge appetite for having a machine-produced explanation about what the AI has done or is doing.

The rub is that oftentimes the AI is arcane and not readily amenable to generating an explanation.

Take as an example the use of Machine Learning (ML) and Deep Learning (DL). These are computational pattern matching algorithms that examine data and try to ferret out mathematical patterns. Sometimes the inner computational aspects are complex and do not lend themselves to being explained in any everyday human-comprehensible and logic-based way.

This means that the inherent design of the AI is, by its structure, not intrinsically set up to provide explanations. In that case, there are usually attempts to add on an XAI component. This XAI either probes into the AI and tries to ferret out what took place, or it sits apart from the AI, preprogrammed to provide explanations based on what is assumed to have occurred within the mathematically enigmatic mechanisms.
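The “probe from the outside” flavor of XAI can be sketched with a model-agnostic technique such as permutation importance, shown here against a hypothetical loan-scoring model. The model, the features, and the data below are all invented for illustration; they do not represent any particular lender’s system:

```python
import numpy as np

# Hypothetical black-box model scoring loan applicants from three
# features: income, debt ratio, credit-history length. In a real XAI
# setting this would be an opaque ML model we cannot inspect directly.
def black_box_score(X):
    income, debt_ratio, history = X[:, 0], X[:, 1], X[:, 2]
    return income * 0.5 - debt_ratio * 2.0 + history * 0.1

def permutation_importance(model, X, n_repeats=10, seed=0):
    """Estimate each feature's influence by shuffling it and measuring
    how much the model's output changes: a model-agnostic XAI probe."""
    rng = np.random.default_rng(seed)
    baseline = model(X)
    importances = []
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            rng.shuffle(X_shuffled[:, j])  # sever this feature's link to the output
            deltas.append(np.mean(np.abs(model(X_shuffled) - baseline)))
        importances.append(float(np.mean(deltas)))
    return importances

# Illustrative applicant data (income in $k, debt ratio, years of history).
X = np.array([[50.0, 0.3, 10.0],
              [30.0, 0.6,  2.0],
              [80.0, 0.1, 20.0],
              [45.0, 0.4,  5.0]])
scores = permutation_importance(black_box_score, X)
```

The features whose shuffling perturbs the predictions the most are the ones the black box leans on, and those can then be surfaced in a plain-language explanation, which is roughly how several model-agnostic XAI tools operate.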

Some assert that you ought to build the XAI into the core of whatever AI is being devised. Thus, rather than bolting onto the AI some afterthought about producing explanations, the design of the AI from the ground-up should encompass a proclivity to produce explanations.

Amidst all of that technological pondering, there are the other aspects of what constitutes an explanation. If you revisit my earlier comments about how explanations tend to work, and the variability depending upon the explainer and the person receiving the explanation, you can readily see how difficult it might be to programmatically produce explanations.

The cheapest way to go involves merely having pre-canned explanations. A loan granting system might have been set up with five explanations for why a loan was denied. Upon your getting turned down for the loan, you get shown one of those five explanations. There is no interaction. There is no particular semblance that the explanation is fitting or suitable to you in particular.
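A minimal sketch of that pre-canned approach, with the denial codes and messages invented purely for illustration:

```python
# Hypothetical denial codes mapped to fixed, pre-written explanations.
CANNED_EXPLANATIONS = {
    "DTI_HIGH":      "Your debt-to-income ratio exceeds our lending threshold.",
    "SCORE_LOW":     "Your credit score is below the minimum for this product.",
    "HISTORY_SHORT": "Your credit history is too short to evaluate.",
    "INCOME_UNVER":  "We could not verify your stated income.",
    "RECENT_DELINQ": "A recent delinquency appears on your credit report.",
}

def explain_denial(denial_code: str) -> str:
    # No interaction, no tailoring: whatever code the system emitted,
    # the applicant sees one of five fixed strings (or a generic fallback).
    return CANNED_EXPLANATIONS.get(
        denial_code, "Your application did not meet our lending criteria.")
```

Note how nothing about the applicant’s situation, background, or follow-up questions can alter the output; that is precisely what makes this the pittance version.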

Those are the pittance explanations.

A more robust and respectable XAI capability would consist of generating explanations on the fly, in real-time, and do so based on the particular situation at hand. In addition, the XAI would try to ascertain what flavor or style of explanation would be suitable for the person receiving the explanation.

And this explainer feature ought to allow for fluent interaction with the person getting the explanation. The receiver should be able to interrupt the explanation, getting the explainer or XAI to shift to other aspects or reshape the explanation based on what the person indicates.

Of course, those are the same types of considerations that human explainers should also take into account. This brings up the fact that doing really good XAI is harder than it might seem. In a manner of speaking, you are likely to need to use AI within the XAI in order to be able to simulate or mimic what a human explainer is supposed to be able to do (though, as we know, not all humans are adept at giving explanations).

Shifting gears, you might be wondering what areas or applications could especially make use of XAI.

One such field of endeavor entails Autonomous Vehicles (AVs). We are gradually going to have autonomous forms of mobility, striving toward a mobility-for-all mantra. There will be self-driving cars, self-driving trucks, self-driving motorcycles (yes, that’s the case, see my coverage at this link here), self-driving submersibles, self-driving drones, self-driving planes, and the rest.

You might at first thought be puzzled as to why AVs might need XAI. We can use self-driving cars to showcase how XAI is going to be a vital element for AVs.

The question is this: In what way will Explainable AI (XAI) be important to the advent of AVs and as showcased via the emergence of self-driving cars?

Let’s clarify what I mean by self-driving cars, and then we can jump further into the XAI AV discussion.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones that the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. Cars that co-share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
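The SAE J3016 levels referenced above can be summarized in a small lookup, with the Level 4/5 distinction captured in a helper. The summaries are paraphrased for illustration, not the standard’s exact wording:

```python
# Paraphrased summaries of the SAE J3016 driving-automation levels.
SAE_LEVELS = {
    0: "No automation: the human does all the driving.",
    1: "Driver assistance: steering OR speed support (e.g., cruise control).",
    2: "Partial automation: steering AND speed support; human must supervise.",
    3: "Conditional automation: system drives in some conditions; human must take over on request.",
    4: "High automation: system drives fully within a limited operational domain.",
    5: "Full automation: system drives anywhere a human driver could.",
}

def requires_human_driver(level: int) -> bool:
    # Per the article's framing: Levels 0-3 keep a human in the driving
    # loop, while Levels 4 and 5 are the "true self-driving" tiers.
    return level <= 3
```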

There is not yet a true self-driving car at Level 5; we do not yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And XAI

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can (see my explanation at this link here).

Why this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

Now that we’ve laid the stage appropriately, time to dive into the myriad of aspects that come to play on this topic about XAI.

I promised that we would use self-driving cars as our showcase for XAI.

First, be aware that many of the existing self-driving car tryouts have very little if any semblance of XAI in them. The initial belief was that people would get into a self-driving car, provide their destination, and be silently whisked to that locale. There would be no need for interaction with the AI driving system. There would ergo be no need for an explanation or XAI capability.

We can revisit that assumption by considering what happens when you use ridesharing and have a human driver at the wheel.

There are certainly instances wherein you get into an Uber or Lyft vehicle and there is stony silence for the entirety of the trip. You’ve likely already provided the destination via the ride-request app. The person driving is intently doing the driving and ostensibly going to that destination. No need to chat. You can play video games on your smartphone and act as though there isn’t another human in the vehicle.

That’s perfectly fine.

Imagine, though, that during the driving journey, all of a sudden the driver takes a route that you find unexpected or unusual. You might ask the driver why there is a change from the otherwise normal path to the destination. That would hopefully prompt an explanation from the human driver.

There you go, I just said it, the notion of an explanation has come up in the context of driving a car.

Indeed, there are lots of situations in which a passenger is likely to seek an explanation from a driver.

We’ll cover a few of those in a moment.

Now, in this ridesharing example, it could be that the human driver gives you no explanation or only a flimsy one. Humans do that. In theory, a properly done XAI will provide an on-target explanation, though this can be challenging to do. Maybe the human driver tells you that there is construction taking place on the main highway, and that an alternative course is being taken to avoid a lengthy delay.

You might be satisfied with that explanation. On the other hand, perhaps you live in the area and are curious about the nature of the construction taking place. Thus, you ask the driver for further details about the construction. In a sense, you are interacting with an explainer and seeking additional nuances or facets about the explanation that was being provided.

Okay, put on your self-driving car thinking cap and consider what a passenger might want from an XAI.

A self-driving car is taking you to your home.

The AI driving system unexpectedly diverts from the normal path that would be used. You are likely to want to ask the AI why the driving journey is altering from your expected traversal. Many of the existing tryouts of self-driving cars would not have any direct means of having the AI explain this matter, and instead, you would need to connect with a remote agent of the fleet operator that oversees the self-driving cars.

In essence, rather than building the XAI, the matter is shunted over to a remote human to explain what is going on. This is something that won’t be especially scalable. In other words, once there are hundreds of thousands of self-driving cars on our roadways, the idea of having the riders always needing to contact a remote agent for the simplest of questions is going to be a huge labor cost and a logistics nightmare.

There ought to be a frontline XAI that exists with the AI driving system.

Assume that a Natural Language Processing (NLP) interface is coupled with the AI driving system, akin to the likes of Alexa or Siri. The passenger interacts with the NLP and can discuss common actions, such as asking to change the destination midstream, asking to swing through a fast-food eatery drive-thru, and so on.
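To make this concrete, here is a minimal sketch of how such an in-vehicle NLP front end might route passenger utterances either to the driving system's action handlers or to an XAI component. All of the intent names, keywords, and the function itself are hypothetical illustrations for this discussion, not any production self-driving-car API.

```python
# Hypothetical sketch: routing passenger utterances to coarse intents.
# Keyword matching stands in for a real NLP model purely for illustration.

def route_passenger_request(utterance: str) -> str:
    """Map a passenger's spoken request to a coarse intent category."""
    text = utterance.lower()
    if "why" in text or "explain" in text:
        return "EXPLANATION_REQUEST"   # hand off to the XAI component
    if "destination" in text or "go to" in text:
        return "CHANGE_DESTINATION"    # mid-journey destination change
    if "drive-thru" in text or "food" in text:
        return "ADD_STOP"              # swing through an eatery
    return "UNKNOWN"                   # escalate to a remote agent

print(route_passenger_request("Why are we taking this route?"))
# EXPLANATION_REQUEST
```

In a real deployment the keyword checks would of course be replaced by a trained language model, but the routing idea is the same: explanation requests get a dedicated path rather than being lumped in with driving commands.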

In addition, the passenger can ask for explanations.

Suppose the AI driving system has to suddenly hit the brakes. The rider in the self-driving car might have been watching an especially fascinating cat video and not be aware of the roadway circumstances. After getting bounced around due to the harsh braking action, the passenger might stridently and anxiously ask why the AI driving system made such a sudden and abrupt driving action.

You would want the AI to immediately provide such an explanation. If the only possible way to get an explanation involved seeking a remote agent, envision what that might be like. There you are, inside the self-driving car, and it has just taken radical action, but you have no idea why it did so. You have to press a button or somehow activate a call to a remote agent. This might take a few moments to engage.

Once the remote agent is available (assuming that one is readily available), they might begin the dialogue with a usual canned speech, such as “Welcome to the greatest of all self-driving cars.” You, meanwhile, have been sitting inside this self-driving car, which is still merrily driving along, and yet you have no clue why it out-of-the-blue hit the brakes.

The point here is that by the time you engage in a discussion with the human remote operator, a lot of time and driving aspects could have occurred. During that delay, you are puzzled, concerned, and worried about what the AI driving system might crazily do next.

If there was an XAI, perhaps you would have been able to ask the XAI what just happened. The XAI might instantly explain that there was a dog on the sidewalk that was running toward the self-driving car and appeared to be getting within striking distance. The AI driving system opted to do a fast braking action. The dog got the idea and safely scampered away.

A timely explanation, and one that then gives the passenger solace and relief, allowing them to settle back into their seat and watch more of those videos about frisky kittens and adorable puppies.


There are lots and lots of situations that can arise when riding in a car and for which you might desire an explanation. The car is suddenly brought to a halt. The car takes a curve rather strongly. The car veers into an adjacent lane without a comfortable margin of error. The car takes a road that you weren’t expecting to be on. Seemingly endless possibilities exist.
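One simple way to picture a frontline XAI for situations like these is as a mapping from logged driving decisions to passenger-facing explanations. The event names, templates, and fallback wording below are invented for illustration; a real system would draw on the AI driving system's actual decision logs.

```python
# Hypothetical sketch: turning a logged driving decision into a
# passenger-facing explanation via templates. All names are illustrative.

EXPLANATIONS = {
    "HARD_BRAKE": "Braking sharply: {reason} was detected ahead.",
    "LANE_CHANGE": "Changing lanes: {reason}.",
    "REROUTE": "Taking a different route: {reason}.",
}

def explain(event_type: str, reason: str) -> str:
    """Render an explanation for a logged driving event, if one exists."""
    template = EXPLANATIONS.get(event_type)
    if template is None:
        # Unrecognized maneuver: fall back rather than invent an answer.
        return "I don't have an explanation for that maneuver yet."
    return template.format(reason=reason)

print(explain("HARD_BRAKE", "a dog running toward the vehicle"))
# Braking sharply: a dog running toward the vehicle was detected ahead.
```

The key design point is that the explanation is grounded in what the driving system actually logged at the moment of the maneuver, which is what lets the XAI answer instantly instead of waiting on a remote agent.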

In that case, if indeed XAI is notably handy for self-driving cars, you might be wondering why it isn’t already in place.

Well, admittedly, for those AI developers under intense pressure to devise AI that can drive a car from point A to point B, and do so safely, the aspect of providing machine-generated explanations is pretty low on the priority list. They would fervently argue that it is a so-called edge or corner case. It can be gotten to once the sunshine of having achieved sufficiently capable self-driving cars has arrived.

Humans that are riding in AVs of all kinds are going to want to have explanations. A cost-effective and immediately available means of providing explanations entails the embodiment of XAI into the AI systems that are doing the autonomous piloting.

One supposes that if you are inside a self-driving car and it is urgently doing some acrobatic driving maneuver, you might be hesitant to ask what is going on, in the same manner that you might worry about distracting a human driver who was doing something wild at the wheel.

Presumably, a well-devised XAI won’t be taxing on the AI driving system and thus you are free to engage in a lengthy dialogue with the XAI. In fact, the likeliest question that self-driving cars are going to get is how the AI driving system functions. The XAI ought to be readied to cope with that kind of question.

The one thing we probably should not expect XAI to handle is those questions that are afield of the driving chore. For example, asking the XAI to explain the meaning of life is something that could be argued as out-of-bounds and above the paygrade of the AI.
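Such a scope boundary could be enforced with something as simple as a topic check before the XAI attempts an answer. The topic list and function below are made-up examples for this discussion, not a real product policy.

```python
# Illustrative sketch of keeping the XAI within the driving domain.
# The topic vocabulary here is invented purely for demonstration.

DRIVING_TOPICS = {"route", "brake", "lane", "speed", "stop", "turn", "destination"}

def in_driving_scope(question: str) -> bool:
    """Return True only if the question touches a driving-related topic."""
    words = set(question.lower().replace("?", "").split())
    return bool(words & DRIVING_TOPICS)

print(in_driving_scope("Why did you brake so hard?"))    # True
print(in_driving_scope("What is the meaning of life?"))  # False
```

Questions that fall outside the driving scope would get a polite demurral rather than a speculative answer.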

At least until the day that AI does become sentient, then you can certainly ask away.