Legal News, Editor-in-Chief
It’s hard to like Elon Musk, the world’s richest man at last look.
He has a love for the spotlight and much like another well-known promoter of debunked conspiracy theories, Musk is prone to use his celebrity megaphone to trash anyone who stands in the way of making his next billion.
And yet, Musk has warned that artificial intelligence could endanger civilization, even as he continues to invest heavily in its development.
Musk – the founder of SpaceX and the CEO of Tesla and X (formerly known as Twitter) – said in an interview earlier this year that “AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production,” to name just a few of his misgivings about its explosive growth in the past year.
For all of his quirks and controversies, Musk is a man to be taken seriously on the subject of AI. After all, he was a founding member of OpenAI, the company behind ChatGPT, the artificial intelligence chatbot released in 2022 that can assist users with such tasks as composing e-mails, essays, letters, articles, and computer code.
Since its launch, the Microsoft-backed ChatGPT has attracted more than 180 million users, mesmerizing its followers with the ability to write poetry, term papers, scholarly articles, and daily news content, while also handling such mundane matters as grocery, packing, and to-do lists.
In a recent article appearing in the Michigan Press Association newsletter, Butzel attorneys Erin Malone and Jennifer Dukarski wrote: “AI is inspiring both awe and fright at its endless possibilities, as many new technologies do,” noting that it may provoke untold copyright infringement claims.
“Lawmakers, courts, and agencies are addressing a wide range of copyright issues relating to generative AI,” Malone and Dukarski said.
“Examples of this include exploring the fact that many AI models utilized datasets incorporating information scraped from the internet or submitted by the public. As copyright law provides a series of exclusive rights, such as reproduction right for copyright holders to make copies of their works . . . there is a question of whether users of generative AI may have created infringing content by using AI. Reproducing content in these datasets or creating content based on protected work can lead to the risk of liability.”
Of course, AI models also can be put to good and effective use, as illustrated by the Associated Press. The news organization is reportedly producing 12 times more stories by training AI software to automatically write short news articles on routine business matters, thereby allowing its team of reporters to hammer out more in-depth pieces on subjects of consequence.
On the other hand, it could just as well be yet another road for disinformation and toxic content to travel, raising an even more sinister specter than what the advent of social media has wrought.
The traditional tug-of-war between the forces of good and evil, a battle that began when Adam and Eve were tempted by that tantalizing piece of fruit in the Garden of Eden, figures to have found a new battleground in the world of AI.
Why some people choose good and others evil may be life’s most perplexing riddle, as we wonder how a paragon of virtue such as the late Mother Teresa – who did so much, selflessly, to aid the poor and hungry – can share the world stage with the likes of Vladimir Putin, the Russian president whose thirst for power and world dominance has brought death and destruction across neighboring Ukraine.
Perhaps AI can somehow articulate an answer for that.
In the meantime, some of the boundaries of AI were explored last spring in an episode of “60 Minutes,” which highlighted recent developments in artificial intelligence that have caused as much concern as they have scientific celebration.
“We may look on our time as the moment civilization was transformed as it was by fire, agriculture and electricity,” said CBS News anchor Scott Pelley in the opening of the popular Sunday night show. “In 2023, we learned that a machine taught itself how to speak to humans like a peer. Which is to say, with creativity, truth, error and lies. The technology, known as a chatbot, is only one of the recent breakthroughs in artificial intelligence – machines that can teach themselves superhuman skills.”
As millions of viewers would soon learn, this technological development is not the stuff of science fiction but, according to social science experts, is in the same game-changing vein as splitting the atom to generate energy.
Of course, the atomic bomb developed through the Manhattan Project also produced devastating consequences – the obliteration of two Japanese cities in 1945, which brought a decisive and unsettling end to World War II.
Whether AI has that kind of explosive potential will be for its architects – and users – to decide, and their choices will ultimately determine how artificial intelligence and humanity coexist.
Peacefully and intelligently, we can only hope.