Gary Gensler, the Chair of the U.S. Securities and Exchange Commission (SEC), recently issued a stark warning to individuals and entities utilizing artificial intelligence (AI) to deceptively promote securities sales. He cautioned those engaging in dishonesty or misrepresentation to brace for “war without quarter.” This announcement is a direct response to the surge in the use of AI and machine learning across various financial technologies, including cryptocurrency, algorithmic trading, and blockchain. Gensler’s statement emphasizes the U.S. government’s determination to pursue both companies and individuals who falsely exaggerate the capabilities of their AI applications.
In line with this commitment, authorities have already begun fraud investigations against parties that use deceptive practices to mislead and defraud investors. Ironically, to uncover such fraud and enforce the law to its fullest extent, government authorities are now being aided by their own AI systems. Agencies such as the Internal Revenue Service and the U.S. Treasury’s Financial Crimes Enforcement Network are leveraging AI-powered tools to analyze vast amounts of data and identify fraudulent activities more efficiently than ever before. This dual role of AI, as both a tool for committing fraud and a means of detecting it, underscores the complex and evolving relationship between AI technology and regulatory enforcement.

The Rise of AI-Backed Fraud
In recent years, the potential of artificial intelligence and machine learning to transform industries has been widely recognized. This is especially true in sectors such as fintech, banking, and finance, where handling vast, complex, and often unstructured data is a common challenge. Within the finance and banking sectors, AI-powered analysis of large datasets has significantly accelerated service transformation, facilitating the introduction of automated trading, enhanced risk management, predictive market analysis, and chatbot-based customer service. Similarly, in the fintech space, AI and machine learning have been instrumental in the development of cryptocurrencies, algorithmic trading, and blockchain technologies, playing a pivotal role in reshaping the industry. Yet the same technology has also enabled certain companies to overpromise, fraudulently exaggerate the capabilities of their AI applications, or misuse the data that powers them.
For example, the SEC recently announced a settlement with Brian Sewell and his company, Rockwell Capital Management, over fraud charges related to a deceptive scheme targeting students of Sewell’s online crypto trading course, the American Bitcoin Academy. The scheme reportedly swindled 15 students out of $1.2 million. The SEC’s complaint alleges that from early 2018 to mid-2019, Sewell enticed his students to invest in a promised hedge fund, the Rockwell Fund, which he claimed would utilize advanced technologies like artificial intelligence and crypto asset trading strategies to generate returns. However, the fund was never launched and the trading strategies were never implemented; the invested funds remained in bitcoin until they were eventually lost in a hack of Sewell’s digital wallet.
Gurbir S. Grewal, Director of the SEC’s Division of Enforcement, highlighted the case as an instance of fraud exploiting buzzwords like artificial intelligence and cryptocurrency to lure investors into non-existent opportunities. Sewell and his company faced charges for violating antifraud provisions of federal securities laws and agreed to a settlement that includes injunctive relief, disgorgement, and civil penalties totaling nearly $1.83 million, pending court approval.
Additionally, a superseding indictment recently revealed charges against an Australian national and a Californian for orchestrating a cryptocurrency Ponzi scheme that defrauded investors of over $25 million. David Gilbert Saffron from Australia and Vincent Anthony Mazzotta Jr. from Los Angeles are accused of enticing investors with promises of high returns through trading programs purportedly powered by an artificial intelligence automated trading bot in the cryptocurrency markets. Operating under various names such as Circle Society, Bitcoin Wealth Management, and Cloud9Capital, the duo allegedly used the invested funds for personal luxuries including private jets, luxury accommodations, and private security, instead of actual cryptocurrency investment.
The scheme also involved the creation of a fictitious entity named the Federal Crypto Reserve, which the pair used to deceive victims further by offering services to recover their lost investments, exploiting them financially a second time. Saffron and Mazzotta employed various tactics to hide their identities and the origins of the victims’ investments, including using aliases and engaging in blockchain hopping to obscure the trail of cryptocurrency transactions. They now face multiple charges, including conspiracy to commit wire fraud, money laundering, and obstruction of justice, with the major counts carrying maximum sentences of 20 years in prison.

The Government’s AI-Powered Response
In a recent address, the Chief Operating Officer of the Justice Department emphasized the transformative power and dual nature of AI within the spheres of law enforcement and national security. By underlining AI as both a critical tool and a complex challenge, the official highlighted its deployment in tracing drug sources, analyzing public tips, and synthesizing evidence in significant legal cases. In particular, government agencies are progressively adopting AI and machine learning to bolster their fraud prevention strategies. Through the application of sophisticated algorithms and advanced techniques, these AI systems adeptly sift through large volumes of data, identifying early indicators of potential fraud.
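To make that idea concrete, the snippet below is a minimal, purely illustrative sketch of the kind of anomaly detection such systems can rely on; it is not any agency’s actual tooling. It applies scikit-learn’s IsolationForest to made-up transaction features (amount, daily transaction count, account age) and flags statistical outliers for human review.

```python
# Illustrative sketch only -- hypothetical data, not any agency's actual system.
# Flags unusual transactions with an unsupervised anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical transaction features: [amount, transactions_per_day, account_age_days]
normal = rng.normal(loc=[200, 3, 900], scale=[80, 1, 250], size=(1000, 3))
suspicious = rng.normal(loc=[9000, 40, 12], scale=[1500, 5, 4], size=(10, 3))
transactions = np.vstack([normal, suspicious])

# Fit an Isolation Forest; roughly 1% of records are expected to be anomalous.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 = flagged as anomalous

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(transactions)} records for manual review")
```

In practice, enforcement systems combine many such models with rule-based checks and human investigators; the point here is only the general pattern of surfacing outliers from large volumes of data as early indicators worth a closer look.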
For example, the U.S. Department of the Treasury and the Internal Revenue Service (IRS) have announced a comprehensive six-year plan to modernize IRS operations, highlighting the critical need for technological advancements to enhance service quality for American taxpayers. This initiative aims to assist individuals and organizations in fulfilling their tax obligations while upholding the principles of integrity and fairness in law enforcement. Former Treasury Secretary Steven Mnuchin emphasized the importance of this modernization for the IRS, which manages one of the globe’s largest and most complex business systems, to protect taxpayer data, improve taxpayer services, and ensure the robustness of the national tax system.
The modernization strategy is structured around four essential pillars:
- Improving the taxpayer experience through advanced technologies that simplify understanding and compliance with tax laws.
- Enhancing core taxpayer services and enforcement to streamline filing processes and utilize data analytics to prevent fraud.
- Updating IRS operations to increase efficiency and incorporate cutting-edge technologies like automation and AI for cost reduction and accuracy.
- Bolstering cybersecurity measures to protect sensitive taxpayer information against escalating cyber threats.
This plan promises unprecedented protections against data and refund fraud, aiming to safeguard against more than 1.4 billion cyberattacks each year and to adapt proactively to the evolving landscape of cybersecurity threats.
Furthermore, Attorney General Merrick Garland recently highlighted the necessity for the Department of Justice to adapt swiftly to technological advancements, ensuring it remains capable of fulfilling its core missions of upholding the rule of law, ensuring national security, and safeguarding civil rights. To this end, Jonathan Mayer has been appointed as the Chief Science and Technology Advisor, tasked with guiding the Justice Department through the intricacies of emerging technologies, particularly AI and cybersecurity, across its various components.
In a dual role that also includes serving as the Justice Department’s Chief AI Officer, as designated by the President’s Executive Order, Mayer is set to play a pivotal part in steering the Department’s AI initiatives. His responsibilities involve leading the Department’s Emerging Technology Board, focusing on the governance and coordination of AI and related technologies. This appointment is part of a broader strategy to build the Department’s technological capacity, emphasizing the recruitment of technical talent and the development of a team within the Office of Legal Policy specialized in technology-related areas critical to the Department’s objectives.
Conclusion
In the near future, both government and industry will increasingly rely on AI and machine learning to improve outcomes, identify illicit activities, boost worker efficiency, and realize the technology’s full benefits. As the government endeavors to craft regulatory frameworks that can keep pace with rapid technological advancements and curb AI-driven fraud and misconduct, it is imperative for corporate leaders to collaborate closely with their tech teams to ensure that the deployment of AI and machine learning aligns with both existing and emerging regulations. The path forward for AI and machine learning within regulated sectors is set to be dynamic and ever-changing, demanding a deep understanding of legal, regulatory, and compliance strategies to sidestep the risks of AI-enabled fraudulent activities.





