As digital technology began to emerge in the early 1970s, Memorex entered the consumer media market with a series of legendary TV commercials featuring Ella Fitzgerald. In those commercials, she would sing a note that shattered a glass while being recorded to a Memorex cassette. When the tape was played back, the recording also broke the glass, prompting the question “Is it live, or is it Memorex?” That line became the company slogan, used in a series of advertisements that ran through the 1970s and 1980s.
Clearly, the reliable duplication of a voice was just the first step toward technology that now enables the duplication of identity itself and a believable Artificial Me (“Ai Me”).
The implications for our financial system, which rests on highly trusted middlemen, as well as for entertainers and authority figures, are now squarely in doubt in some media. This is the conclusion reached by Wall Street Journal reporter Joanna Stern, writing on April 28 in an article titled “I Cloned Myself With AI. She Fooled My Bank and My Family.”
Given that we all rely on face-to-face and voice relationships with trusted third parties like bankers, lawyers, accountants, tax return preparers, money managers, doctors, law enforcement and elected officials, hijacked identity is a threat to all “trust based” systems. Here is Ms. Stern’s experiment in her own words:
“Over the past few months, I’ve been testing Synthesia, a tool that creates artificially intelligent avatars from recorded video and audio (aka deepfakes). Type in anything and your video avatar parrots it back. Since I do a lot of voice and video work, I thought this could make me more productive, and take away some of the drudgery. That’s the AI promise, after all. So, I went to a studio and recorded about 30 minutes of video and nearly two hours of audio that Synthesia would use to train my clone. A few weeks later, AI Joanna was ready.”
Ms. Stern caught my attention when her column disclosed that her AI Joanna was convincing enough to fool a biometric voice recognition test at one of the leading banks in the US. While this was just a first-level identity query, it still shocked me, because I use a brokerage company where, for more than three years, a recording has assured me that “at this brokerage company your voice is your identity.”
She also fooled her sister and parents with AI Joanna phone calls, but not with video. Apparently, the video did not show facial contortions, breathing or other typical talking gestures like hand movements, pauses, intonations or body language. Still, Ms. Stern concluded that while AI Joanna did not fool everyone, it was probably good enough for social media: “On TikTok, where people have the attention span of goldfish, those computer-like attributes are less noticeable.” Just consider how YouTube could be hijacked by avatars impersonating anyone in the public eye.
Can Ai Me Initiate Wire Transfers?
Ms. Stern also grasped the implications of an avatar stand-in for Zoom and other conference calls. While the technology has not yet advanced enough to put an Ai Me Ferris Bueller back in his seat at Glenbrook High School, it may be good enough for the remote commerce world that is now becoming a fixture of post-Covid business interactions. For example, I wonder whether an Ai Me could initiate a wire transfer or fool a group of board members on a hastily convened board of directors conference call. What about press releases and earnings calls? Anywhere remote interactions are replacing face-to-face meetings, there is the possibility of successful Ai Me fraud.
What About Public Figures?
Reputations can be lost in a millisecond on social media and repaired only over decades. What if tampering with voting machines becomes secondary to artificial candidates? What if press conferences can be faked? What if pronouncements by police commissioners and boards of health can be faked, or controversial public figures like Elon Musk can be hijacked through fake phone interviews about Tesla or SpaceX?
In Person Infrastructure
Ai Me comes calling at a vulnerable moment for an entire economy working remotely. Think about a phony IRS deficiency notice followed by a Zoom call with an Ai Revenue Agent. How about a credit agency like TransUnion calling about a credit downgrade, or your sister calling with an urgent request for a MoneyGram? I asked ChatGPT about our financial system’s potential vulnerability to avatars impersonating clients, depositors and taxpayers. Here is what it said:
“Financial systems and institutions can be vulnerable to fraudulent avatars impersonating clients, depositors, and taxpayers. This vulnerability arises because fraudulent avatars can often successfully mimic the behavior and characteristics of legitimate users, making it difficult for financial institutions to detect and prevent fraud.
One way that fraudulent avatars can impersonate clients is by using stolen identity information to open bank accounts or apply for loans. In some cases, criminals may also use deepfakes or other advanced technologies to create realistic video or audio recordings of someone’s voice or facial expressions, which can be used to further convince financial institutions of their legitimacy.
Similarly, fraudulent avatars can also impersonate depositors and taxpayers in order to gain access to their accounts or to fraudulently claim tax refunds. In these cases, criminals may use stolen account information or personal identification details to pose as legitimate depositors or taxpayers.”
What started with a credible facsimile of Ella Fitzgerald’s voice is now becoming believable duplication of identity. In 1944, when Ms. Fitzgerald was only 27 years old, she may have had a premonition of her coming role in this technological journey when she recorded one of her most memorable early songs… “I’m Making Believe.”
The above commentary is for informational purposes only and is not intended as legal or investment advice or as a recommendation of any particular security or strategy. Information prepared from third-party sources is believed to be reliable, though its accuracy is not guaranteed. Opinions expressed in this commentary reflect subjective judgments based on conditions at the time of writing and are subject to change without notice.