Fatal Tesla Victim & Apple Engineer Walter Huang’s Survivors
#53 After losing their brother, father & husband Walter to an A.I.-related accident, Huang's survivors are suing for wrongful death
Notice: This post was written without the assistance of ChatGPT or any other text-generating algorithm.
Introduction: Buzzfeed News reached out to CBS about exaggerated A.I. capabilities CBS reported in its 60 Minutes newscast last month, but got no response. This newsletter is about the COVID infodemic, so today I’m going to talk about disinformation about a technology that itself manufactures disinformation at scale. That technology is A.I., and its planned successor is “A.G.I.”
Apr 19 Researchers Accused Google And 60 Minutes Of Spreading AI Disinformation | AI is not some mysterious, magical, autonomous being, one critic said. (Buzzfeed News) - At the far extreme, today’s A.I. hobby theorists hope the technology will spring to life as something they call “A.G.I.,” but there is no proof of concept yet, despite a 60 Minutes broadcast implying a system spontaneously taught itself “all of Bengali” without any exposure to the language. “Margaret Mitchell, a researcher and ethicist at AI startup Hugging Face who formerly co-led Google’s AI ethics team, pointed out that PaLM, is, in fact, trained on Bengali according to a paper published by Google’s own researchers. The paper says that Bengali made up 0.026% of PaLM’s training data.”
**But first, here is a brief COVID roundup: while humans have more immunity than they did in 2020, COVID, a virus that shut down far more economic activity than AIDS ever did, is still circulating, infecting, sickening, maiming and killing thousands of Americans weekly. The W.H.O. announced today that COVID isn’t over, but that its “Global Pandemic Emergency” declaration will end. In the U.S., the expiration of the national Public Health Emergency (PHE) on May 11th does not mean the virus is gone; it just means some emergency suspensions of fraud liability, among other things, will conclude.**
So I want to shift to talking about how A.I. will affect information about COVID & other infectious diseases, and here’s why. As Reporters Without Borders wrote in their annual report published this week1, our information command posts, the news outlets, and our information flows have shown significant strain over a decade dominated by social media platforms:
Reporters Without Borders director Christoph Deloire called this year’s global press health index “volatile” which he said is “the consequence of growth in the fake content industry, which produces and distributes disinformation and provides the tools for manufacturing it.”
Also, NewsGuard just reported on “49 news and information sites that appear to be almost entirely written by artificial intelligence software.” Their introduction added “a new generation of content farms is on the way2.” The whole report is worth reading.
Update 4/30/2024: The coupon at bottom has expired, but you can still join at the founder level, which now comes with limited-edition swag. You could also join at the monthly level, then immediately turn off auto-renew. Or you can subscribe and leave auto-renew on. Three options are available by clicking on this expired button:
To develop my points on A.I., a disinformation mega-generator, and on disinformation *about* A.I., I’m introducing Walter Huang’s wife, children and brother for two reasons. Reason one is that the plight of Huang’s survivors and their lawsuit against Elon Musk illustrate something important about A.I. itself: it muddles the chain of command between human and machine, fogging up jury deliberations over how we apportion accountability in civil society when things go wrong and people get hurt. Reason two is that Huang’s story carries a lesson about how high-profile news outlets, whose reporters spend too much time inside internet platforms run by CEOs who want only good news about A.I. to circulate, grow blind to the tactics powerful tech tycoons employ to bury important stories the public needs to know.
Our currency, the dollar in the case of the U.S., is backed by trust. If we don’t develop A.I. rules properly, our economy will go even more haywire than it already has due to COVID.
I Was, Until Recently, a Tesla Fan
I begged one of my best friends to buy a Tesla in 2017, but my position has turned since then. I’m not alone. Ex-tech reporter Molly Wood returned her Tesla in 2022 and exchanged it for a Polestar Electric Roadster for several reasons. Tesla’s inaccurate ghost-braking while she was traveling on a (luckily) near-empty highway at high speed sowed some doubts. Wood described other mishaps she experienced while owning and operating this semi-autonomous vehicle to fellow ex-tech reporter Kara Swisher on Swisher’s Pivot podcast3. Another reason Wood gave back the car is that her son is Jewish, and she wasn’t as eager to drive him to school in her Tesla after CEO Elon Musk did this:
Feb 17 2022 Elon Musk tweets, then deletes, meme comparing Trudeau to Hitler (Reuters) - “Elon Musk compared Canadian Prime Minister Justin Trudeau with Adolf Hitler in a tweet that appeared to support truckers protesting vaccine mandates -- and which immediately triggered a storm on Twitter.”
I’m convinced Musk transformed himself into a right-wing troll to convert fossil fuel loyalists into electric car fans. That strategy is bringing him mixed success, but success nonetheless. Tesla’s sales are slightly up, globally4.
On April 3 of this year, just before Musk did something that set Twitter-frequenting journalists talking about his antics, he was in court, as he often is, though that is sparsely reported. And when Musk is in court with a jury to sway, he often wins, even when it seems close.
The following Reuters story about Musk’s court appearance didn’t go viral on social media or the internet. But the Time magazine story about Musk’s antics – this time he replaced the Twitter bird graphic with a dog, the doge meme dog after which the cryptocurrency he promotes is named – was reported in several outlets. No court appearance is mentioned in the Time article:
Apr 03 Elon Musk seeks to end $258 billion Dogecoin lawsuit (Reuters) - “Elon Musk asked a U.S. judge on Friday to throw out a $258 billion racketeering lawsuit accusing him of running a pyramid scheme to support the cryptocurrency Dogecoin.”
Apr 04 Dogecoin Jumps After Shiba Inu Replaces Twitter Bird Logo (Time) - “Dogecoin rose as much as 31% after Twitter users noticed their home buttons changed into the dog meme after which the cryptocurrency is named.”
Apr 05 T-ITTER: Elon Musk Appears to Cover W on Twitter Sign Days After Changing Logo to Doge Meme (Mediaite) - “Photos posted by Twitter user @softwarejameson showed that the letter was covered on both sides of the sign, leaving the word T itter proudly displayed on the building.”
Having successfully buried unfavorable news about lawsuits against him in early April, Musk kept up the attention-getting stunts through the spring. Reports of his antics shrouded more news about Musk’s court cases, including a development in the lawsuit brought by Walter Huang’s family. Musk got a surprisingly favorable ruling from the judge in this case in February. But then his team pushed their luck to the extreme, which prompted this rebuke:
Apr 27 Elon Musk Ordered To Be Deposed in Trial Over 2018 Tesla Autopilot Crash in Mountain View (SFist) - "Tesla attorneys have argued that Elon Musk can’t be responsible for video evidence of him touting Full Self-Driving mode because the videos might be deepfakes, so the judge in the trial of a man killed in a 2018 Tesla crash is ordering Musk to be deposed himself."
“Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune,” the judge wrote.
The six-year-old video of Elon Musk exaggerating Tesla’s full self-driving capabilities at a tech conference5 is hosted on news outlet Recode’s YouTube channel.
Musk’s legal team’s position is that because Mr. Musk is famous, his public statements are now immune in the age of widely available A.I. generators. Interesting.
So who is Apple engineer Walter Huang? Huang was a 38-year-old father, husband and brother when, on his way to work one day, his Tesla sharply veered left, crashed into a Mountain View, CA freeway barrier and burst into flames. Huang had noticed the car jerking to the left at the same spot on his morning commute seven or eight times, his brother told ABC affiliate KGO back in 2018, but the Tesla dealer told Huang they didn’t see a problem since they couldn’t duplicate it. So he resumed relying on the company’s ambiguity and Musk’s public assurances that Tesla’s autonomous driving A.I. was safer “than a person,” which is what Musk told a Recode conference audience in 2016, as shown in that video his team is trying to get dismissed5.
Chain of Command = Chain of Responsibility = Chain of Accountability
When you talk to aircraft pilots about the chain of command between human and machine, and then contrast their protocols with Tesla’s instructions for its landcraft pilots, Musk’s comments in that Recode video raise flags.
The movie “Sully,” based on the true story of Captain Chesley Sullenberger, the pilot who steered a plane with two dead engines to a safe landing on New York’s Hudson River, re-enacts the N.T.S.B. investigation and the moment Sullenberger formally takes the chain of command from First Officer Jeffrey Skiles. Captain Sully, in the official transcript, and as reenacted by Tom Hanks in the feature film, is at full attention long before he grabs the chain of command with the verbal cue “my aircraft,” which Skiles acknowledges with the response “your aircraft.” The movie scene copies the transcript verbatim6, showing Sullenberger to be an alert backup pilot long before this point.
Something I laid out March 18th in Tracing #50, under the section “Level #3 is the Number One ‘A.I.’ Issue,” bears repeating. Of the five levels of human-machine cooperation in vehicle automation7, level #3 is the murkiest, and the most difficult level in which to apportion accountability. This next newsclip is from “Marketplace Tech,” then hosted by Molly Wood, whom I mentioned above. Here Wood is interviewing former Navy fighter pilot Missy Cummings, now director of the Humans and Autonomy Lab at Duke University:
May 02 2018 Self-driving cars still need our help and that might be a problem (Marketplace Tech on NPR) - “Then the car hands over control back to the human. And this is the deadliest phase. And in fact I'm pretty much against level 3, I don't think it should exist at all. Because one thing I know as a former fighter pilot: having a human step inside the control loop at the last possible minute is a guaranteed disaster.”
Had it not been for a pandemic and then an insurrection, this 2018 “A.I. level three” report would have been followed up on. But it’s been ignored, and it needs oxygen.
Mar 28 2018 I-TEAM EXCLUSIVE: Victim who died in Tesla crash had complained about Autopilot (KGO ABC 7 San Francisco San Jose) - “Walter Huang's family tells Dan Noyes he took his Tesla to the dealer, complaining that -- on multiple occasions -- the Autopilot veered toward that same barrier -- the one his Model X hit on Friday when he died.”
Health Tech: Patient Beware, Doctor Beware
Corporate tech leaders who are not engineers are telling their investors they plan to make a lot of money off A.I. products they’re looking to quickly develop, then push into healthcare. To train new A.I. products, their engineers are going to need to scrape a lot of healthcare data protected by aging privacy laws. As Harvard tech security lecturer Bruce Schneier said in his latest book8, the hacker’s ethos is not to change rules up front, but to find loopholes through them or work around them. An ethos to follow the letter of the law while violating the spirit is commonly known as “bad faith.”
If these leaders’ tech companies rush untested products into these spaces, healthcare chains of command will grow foggy. Human nodes of accountability will evaporate. Injured parties – be they patients, be they doctors – could have fewer chances for redress.
As we’ve seen through the pandemic, actors with an agenda can fund scientific reviews that get uploaded to preprint servers. Some agenda-driven studies even make it through review and get published in journals, then get laundered through news reports authored by overworked journalists, only for those studies to be retracted when the journals’ editors are alerted in the right way.
Technology companies have also funded academic studies that serve their agenda:
Nov 24 2015 Google's insidious shadow lobbying: How the Internet giant is bankrolling friendly academics—and skirting federal investigations (Salon) - “The Federal Trade Commission (FTC) had opened multiple investigations into whether the tech giant illegally favored its own shopping and travel sites in search engine queries; restricted advertisers from running ads on competing sites; and copied rival search engines’ results. To fight this threat, Google turned to a key third-party validator: academia, and in particular one university with a long history as an advocate for corporate interests. From the beginning of the FTC investigation through the end of 2013, Google gave George Mason University’s Law and Economics Center (LEC) $762,000 in donations, confirmed by canceled checks obtained in a public records request.”
So healthcare leaders and health journal editors may need to exercise extra skepticism about any new A.I. product getting a lot of buzz. A.I. is an old technology that very suddenly has tens of billions of dollars lined up behind it. And based on what some corporate leaders are saying, if they can increase A.I. use, more A.I. will get them closer to their dream of what I call the “A.G.I. superbrain,” which I can’t get into here.
That’s all I have to say about the future of information for one issue. A.I. could be used for some very cool products, but only if we stay grounded in the real world as this development wave launches. I hope these points landed and you found some of them useful.
I’ll be shaking up the newsletter schedule, so the next issue won’t necessarily arrive on a Thursday as it once did. Until then, remind hospitals they can get sued if they ditch masks and your loved one contracts COVID9, and get four more free tests from COVID.gov/tests before May 12.
If you can, gather with strangers or friends while soaking up some COVID-killing UV rays.
1 Reporters Without Borders (Reporters Sans Frontières): https://rsf.org/en/2023-world-press-freedom-index-journalism-threatened-fake-content-industry?data_type=general&year=2023
2 NewsGuard https://www.newsguardtech.com/special-reports/newsbots-ai-generated-news-websites-proliferating/
3 Wood and Swisher talk Tesla on Pivot https://www.iheart.com/podcast/358-pivot-with-kara-swisher-an-29927080/episode/adam-neumanns-return-climate-change-tech-101071117/
4 Musk Has Turned Tesla’s ‘Failing’ Into Winning https://www.washingtonpost.com/business/energy/musk-has-turned-teslas-failing-into-winning/2023/02/21/2cd24416-b1e2-11ed-94a0-512954d75716_story.html
5 Musk at Recode, video cued to his statement: “A Model S and a Model X, at this point, can drive with greater safety than a person. Right now.”
6 Captain Sullenberger’s Hudson River flight transcript
https://nypost.com/2009/06/09/us-airways-flight-1549-transcript/
7 Five Levels, National Highway Traffic Safety Administration
8 Book Review: A Hacker’s Mind | How the Powerful Bend Society’s Rules, and How to Bend Them Back by Bruce Schneier
https://datebook.sfchronicle.com/books/booksreview-digital-tech-advances-ai-spur-hacking-of-17772899
9 Hospitals That Ditch Masks Risk Exposure
https://blog.petrieflom.law.harvard.edu/2023/02/20/hospital-liability-covid-infection/