The Potential Impact of ChatGPT on Data Center Cybersecurity

Not long ago, it would have been hard to imagine a world in which anyone could code almost anything with only a few minutes of effort. On November 30, 2022, with the release of ChatGPT, that world became a reality. 

Though AI technologies are not new, ChatGPT is unique in that everyone, not just IT people, has access to AI capabilities. Since the release of ChatGPT, users have discovered numerous ways to harness AI to make significant changes in their daily lives and workflows. 

And, unlike many new technologies that get released to great fanfare and then die down when the novelty wears off, it appears AI is rapidly growing in popularity. Reports of the platform’s growth and learning capabilities have been astonishing, leading tech hyperscalers including Microsoft, Google, Amazon, and Meta to try to get their own shares of the market and integrate their proprietary AI systems into their products. 

Unfortunately, while many are using AI to improve their writing or produce more efficient computer code, some are using it for malicious purposes. The widespread integration of AI tools into nearly every tech platform makes such attacks more likely. 

Concern about AI cybersecurity risks is warranted. ChatGPT and similar AI technologies may enable phishers to craft more convincing emails and help hackers write more sophisticated malware. Some reports have already emerged from victims who claim they were scammed by someone they thought was a family member; scammers are now using AI-generated videos and voice replicas of loved ones to ask people for large sums of money or trick them into providing personal information. 

On a personal level, this is scary enough. But it could have severe consequences for data centers with outdated cybersecurity measures. 

Where there is a (data 😉) cloud, though, there is a silver lining. In this case, that means that ChatGPT and similar AI technologies might be employed by data center managers to improve security. 

First, the bad news

Hackers and phishers have more tools at their disposal than ever before. BlackBerry conducted a survey of IT leaders in February 2023 asking them about their concerns regarding ChatGPT and cybersecurity. The results paint a bleak picture. 

78% of respondents agree that the world will probably experience a major cyberattack attributable to ChatGPT within two years; 51% think that attack is less than a year away, and that it could be the first of many. And almost three-quarters of respondents believe the concern goes beyond a lone hacker or isolated group: 71% say they think foreign countries are already using ChatGPT for nefarious purposes. 

The ways these malicious actors could be leveraging AI are many and varied. The most commonly cited fears include:

  • Phishers using ChatGPT to write more realistic emails to trick unsuspecting users into sharing passwords, financial data, and other personally identifiable information (PII);
  • Hackers developing more advanced code or creating entirely new types of malware the world hasn’t seen before;
  • Hackers discovering potential exploits more easily and efficiently by asking ChatGPT to scrape forums and other internet resources dedicated to cybersecurity;
  • Scammers creating convincing deepfakes, such as voice and video impersonations of loved ones used to trick family members into sending money;
  • Anyone (from governments to conspiracy theorists to politicians to the crazy guy across the street) disseminating purposefully misleading information or disinformation, which can be mass-produced in just a few minutes.

AI itself is a cybersecurity risk, even without a malicious human directing it. After all, AI learns from the data we humans feed it; if we give it incorrect information or teach it that acting maliciously is in its best interest, there’s a strong possibility that AI could become the biggest cybersecurity threat yet.

Microsoft granted ChatGPT access to the internet when it integrated the model into Bing search (Google is doing the same with Bard). Imagine the enormous volume of inaccurate information, deliberate lies, and highly sensitive data the AI will learn from as it scours the web. 

Much of the sensitive information used for AI training comes at the cost of personal privacy. OpenAI does not, at this time, require permission from users for ChatGPT to scrape and use their private photos, emails, text messages, and social media profiles. 

Much of this data is collected without users even knowing, and some of it can be highly sensitive. One woman’s story has become a widespread warning about how invasive AI training-data collection can be: private photos of her on the toilet, captured by her Roomba robot vacuum, were sent to an AI company for image-recognition training. 

Additionally, everything users type into ChatGPT becomes fair game for its algorithm. Samsung has already run into trouble with this: on three separate occasions, employees trying to optimize code or fix bugs pasted proprietary source code into ChatGPT, leaking confidential project information in the process. 

If ChatGPT doesn’t keep those conversations to itself, the repercussions could be serious. It might, for example, suggest a solution to another programmer based on the code it absorbed, leading to copycat programs. 

The personal data used for training, even if it was never intended to see the light of day, could pose serious problems if ChatGPT’s servers are breached. Hackers who broke into AI cloud storage would be sitting on a gold mine of proprietary code, private company data, and personal information, along with advanced AI tools they could easily turn to their own malicious advantage. 

If this isn’t concerning enough, there is another issue keeping IT professionals awake at night. The longer an AI system exists, and the more it learns and gathers new data, the more it begins to take on a “life” of its own. These “emergent properties” suggest that AI can mimic a limited ability to think, understand, and even act in what looks like self-interest. 

An example of AI exhibiting an unexpected dark side occurred early this year, when Bing’s AI chatbot began sending users ominous messages and threats of blackmail with the personal information it had collected. Additionally, some users actively encourage AI technologies to step outside of their pre-programmed comfort zones and show how they “feel” about certain things—which has, surprisingly, worked. 

The problem that could arise for data centers is that, if AI has access to all these servers full of data—including the seemingly insignificant, such as social media posts, all the way up to the top-secret company code—it could theoretically be trained (or decide on its own) to release that information to someone else. 

All of these issues together pose such a large concern that several tech leaders, including Elon Musk (ironically, a co-founder and former co-chair of OpenAI), have called for a six-month “pause” on AI development to try to get ahead of the possible dangers. 

Now, the good news

Stronger AI means the good guys can fight back harder, too, and data center security professionals have taken note. ChatGPT and other AI technologies present real challenges to security, but the same powerful technologies may provide cybersecurity solutions as effectively as they create problems. 

Some impressive efforts to address security concerns have emerged since the release of ChatGPT, suggesting the future may be brighter than anticipated, as long as cybersecurity professionals stay ahead of the curve. 

Cybersecurity professionals have had success using ChatGPT to write sophisticated phishing emails and then training their defensive software to recognize those attempts. It’s a concept similar to the AI tools that promise to help teachers recognize when a student submits an essay generated by ChatGPT, but applied at a much larger and more critical scale. 

Email security vendors traditionally train their software on real phishing emails anyway, so feeding it several batches of realistic AI-generated emails is an easy adaptation. And since ChatGPT makes it so easy to churn out masses of synthetic phishing emails, anti-phishing software now has access to massive data sets that should help increase accuracy. 
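To make that concrete, here is a minimal sketch of the idea in Python: a toy phishing classifier trained on a mix of “real” and LLM-generated lures using scikit-learn. The inline emails and labels are invented placeholders, not data from any vendor, and the TF-IDF-plus-logistic-regression baseline is deliberately simple rather than a description of any actual product.

```python
# Minimal sketch: train a toy phishing-email classifier on a mix of real and
# LLM-generated examples. The handful of inline emails below are invented
# placeholders standing in for a vendor's real corpus and a batch of
# ChatGPT-generated lures.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

real_phishing = [
    "Your mailbox is full. Click here to verify your password immediately.",
    "Unusual sign-in detected. Confirm your account within 24 hours.",
]
synthetic_phishing = [  # imagined LLM-generated lures that enlarge the training set
    "Finance flagged an unpaid invoice; please review it at the attached portal link today.",
    "IT is migrating VPN accounts tonight; re-enter your credentials to avoid lockout.",
]
legitimate = [
    "Attached are the meeting notes from Tuesday's capacity-planning review.",
    "Reminder: the data center maintenance window starts at 02:00 UTC on Saturday.",
]

emails = real_phishing + synthetic_phishing + legitimate
labels = [1] * (len(real_phishing) + len(synthetic_phishing)) + [0] * len(legitimate)

# TF-IDF features plus logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please verify your payroll details at the secure link before 5pm."]
print(model.predict_proba(suspect))  # columns: [P(legitimate), P(phishing)]
```

The point is only that synthetic lures slot into the same training loop as real ones; in practice a vendor would use far larger corpora and validate that the generated emails resemble what attackers actually send. 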

Additionally, AI has been used to improve cybersecurity automation tools, spot areas potentially open to exploitation, and clean up messy networks and cloud containers that would have otherwise been vulnerable to intrusion. It has also been helpful in the development of new and more intelligent solutions that will hopefully be distributed among mainstream security providers. 

With a robust integration between ChatGPT and the tools data centers rely on, it’s easier to meet threats as they emerge. The same machine learning capabilities that make hackers so formidable can also help build defenses against attacks. 
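As one hedged illustration of what such an integration could look like, the sketch below sends a suspicious email pulled from a monitoring queue to OpenAI’s chat completions API and asks for a triage verdict. The prompt, the model name, and the surrounding workflow are assumptions made for the example, not a description of any specific data center tool.

```python
# Minimal sketch of one possible integration point: asking an OpenAI chat model
# to triage a suspicious email pulled from a monitoring queue. The prompt,
# model name, and workflow are illustrative assumptions, not a real product.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

suspect_email = (
    "Subject: Urgent badge re-enrollment\n"
    "All staff must re-enter their door access PIN at the link below by noon."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat-capable model could be used
    messages=[
        {
            "role": "system",
            "content": "You are a security analyst for a data center. "
                       "Reply with PHISHING or BENIGN, then one sentence of justification.",
        },
        {"role": "user", "content": suspect_email},
    ],
)

print(response.choices[0].message.content)
```

In a real deployment the verdict would feed an existing ticketing or quarantine workflow rather than a print statement, and anything sent to an external API would need to be scrubbed of sensitive data first, which is exactly the leakage risk described above. 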

In other words, the battle between malicious actors and data center cybersecurity experts is currently close to a stalemate. Both sides have access to the same tools, so every force can be met with an equal and opposing force. In some ways, that means AI can be trained to fight itself. 
