ChatGPT is coming for crypto

If you’re not convinced about the power of artificial intelligence, check out the illustration above. I generated it in seconds by typing “vaporwave robot carrying a suitcase full of cryptocurrency in a dark alley” into DALL-E 2, a free tool from OpenAI, the AI company funded by Elon Musk.

OpenAI’s latest AI model is even more powerful: an eerily realistic chatbot called ChatGPT that can produce reams of insightful text on almost anything you throw at it. And unlike other language generation models, it can remember what you’ve told it, allowing for conversations that create a convincing impression of a mind at work.

Impressively, the chatbot can also turn human messages into lines of code: At my command, ChatGPT wrote a smart contract in Solidity, Ethereum’s programming language, that turned the DALL-E image I had generated into an NFT.

Although ChatGPT is only a free research preview, it has already captured the tech world’s imagination, hitting one million users just five days after it launched late last month. By comparison, GitHub’s AI coding assistant took roughly six months to reach the same milestone.

The prospect of outsourcing mentally taxing work to an AI assistant has also captivated the crypto crowd. The space offers plenty of room for people operating far beyond their abilities, which makes an ultra-confident chatbot both exciting and dangerous. While capable developers can use the technology to improve their code or bridge language barriers, ChatGPT also makes it easier than ever to produce malicious code or spin up a honeypot plausible enough to fool investors.

Some crypto professionals are already making good use of the technology. Hugh Brooks, head of security for smart contract auditing firm CertiK, said the chatbot wasn’t half-bad at finding bugs in code, and has become invaluable for summarizing complicated code and dense academic papers.

And Stephen Tong, founder of a small blockchain security company called Zellic, said his company is already using it for sales and customer support. “It makes everyone on these teams more efficient,” he said, and allows cosplaying tech bros to provide a “super buttoned-down, professional experience” without breaking a sweat.

Also near the front of the pack of crypto-utopians is Tomiwa Ademidun, a young Canadian software engineer who used ChatGPT to code a cryptocurrency wallet from scratch, then generated a detailed guide, complete with diagrams, teaching people how to use it.

“It’s honestly very impressive,” he said. ChatGPT taught Ademidun complex cryptography concepts with the avuncular charm of a friendly high school computer science teacher, then generated what he described as nearly error-free code. And when Ademidun caught a mistake, the chatbot politely apologized, then corrected itself. It led to a small career crisis for the young software engineer: After ChatGPT, “What do you still need me for?”

Quite a lot, it turns out. The technology is far from perfect, often spewing hot garbage with supreme confidence when confronted with impossible questions. Stuck on a desert island with no arms or legs? “Use your arms to crawl or scoot,” then “make a makeshift wheelchair,” the chatbot advised. Need help delivering Chinese food to a spaceship bound for Mars? “Many space agencies provide food delivery services to astronauts,” it claimed.

Programmers must also be smart enough to see through ChatGPT’s unwavering belief in its own gobbledygook. When Outlier Ventures’ lead blockchain engineer, Lorenzo Sicilia, experimented with the technology, he found it useless for more advanced smart contract work. “As soon as you try it, you discover all the little details that don’t work,” he said.

Because ChatGPT is limited to a dataset that cuts off in 2021, its code generated errors when pasted into the latest virtual machines. And as a bubbly conversational AI, it lacks the ability to formally verify its own code.

While some crypto developers have found in ChatGPT a tireless troubleshooting assistant, others are already trying to exploit the technology for quick cash. Daniel Von Fange, a stablecoin engineer, rejected a submission for a lucrative “bug bounty” earlier this month that he believes was generated by ChatGPT.

“It had taken things from my answer with simulation code (written in one programming language), mixed it with test code (in another language), and invented a third problem that was just as bogus as the other two,” he explained to Fortune. “It’s like someone with all the flashy, sponsor-covered Nomex of a NASCAR driver, but who can’t find the steering wheel of a pickup truck,” he told the cybersecurity blog The Daily Swig.

Artificial intelligence that can write persuasive nonsense is also perfect for generating phishing campaigns that lead people to GPT-created malware, or for coordinating harassment campaigns waged by annoyingly lifelike Twitter bots. It can also be used by opportunists looking for a new round of gullible investors.

Equally harmful is so-called educational material that may bear no relation to the truth. Likewise, investors can lose money to faulty AI-generated trading bots whose inner workings they don’t understand.

And yet, despite the risks, Ademidun finds himself on the optimistic side of technological determinism. Like any tool, he said, ChatGPT can be used for good or bad—the bottom line is that it can be very powerful, especially when OpenAI feeds it more data, including real-time information.

In fact, if ChatGPT had succeeded in its quest to find a flaw in Von Fange’s code, the engineer said he would have gladly paid out $250,000. The chatbot is proof that the progress train has already left the station. “The question is, are you going to jump on or just watch it move past you?” Ademidun said.

Outside of crypto, people are certainly putting it to work in their daily lives. A consultant confided that he fed superficial recommendations from a factory visit into a prompt, then sent the AI-generated report to his unsuspecting boss, who made only minor changes before forwarding it to the client. A dazed lawyer at a top London law firm told me he used it to generate an endless supply of bedtime stories for his children.

Yet in its current, limited incarnation, ChatGPT is better understood as a fascinating scientific experiment. Sam Altman, a co-founder of OpenAI, said on Twitter that the technology creates “a misleading impression of greatness.” It’s just a “preview of progress,” he said. “It’s a mistake to be relying on it for anything important right now.”
