Few could have foreseen the rapid rise in popularity that the AI chatbot ChatGPT would achieve after its launch in November 2022.
Having amassed over one million users within its first week, ChatGPT has propelled itself to the centre of AI and wider tech conversations through its powerful, human-like responses generated at the click of a button.
Now GPT-4, the latest and more powerful version of the chatbot, continues to raise questions about businesses’ cybersecurity credentials.
Despite its many uses, recent reports suggest that cybercriminals have moved swiftly to weaponise the chatbot.
In response to this emerging threat, business leaders from SoDA (Software Development Association of Poland) offer guidance on how businesses can mitigate the risk posed by GPT-4 and where the responsibility lies in this debate.
Dr Jerzy Biernacki, head of operations at Miquido, says: “GPT-4 has developed into a disruptor within artificial intelligence (AI) and demonstrates the powerful capabilities AI has to offer.
“The recent criminal activity being undertaken via GPT-4 is disappointing, but it ultimately is no surprise. When new technology is developed, there’s always a chance it can be used for harm.
“In terms of where the responsibility lies, we should expect greater strides being taken by OpenAI following its partnership with industry heavyweight, Microsoft. Implementing robust security measures to prevent unauthorised access or misuse of the service will likely be top of the agenda for OpenAI in the coming months.
“Ultimately though, it will be the end users who shoulder the most responsibility for GPT-4’s use. OpenAI has launched a very powerful tool, and suitable education should be on hand.
“Educating users on the security implications surrounding GPT-4 and the mechanics behind the chatbot will be pivotal moving forward and could potentially decrease the volume of criminal activity.”
Lukasz Brandt, senior security analyst at DAC.digital, says: “From a business perspective, the risk of falling victim to AI-generated code could affect an organisation on multiple fronts.
“The introduction of bugs or other weaknesses could disrupt a business through data breaches and network disturbances – both of which could subsequently lead to financial loss and reputational damage down the line.
“Software developers and other industry bodies are equipped with the necessary skills and knowledge to help guide businesses through this confusing landscape.
“Businesses should consider bringing in the expertise of developers to help provide guidance and best practices for reviewing, testing, and deploying code in a production environment.
“This third-party expertise can help to reduce the risk of data breaches and other security incidents.”
Brandt outlines the role user authentication could provide to mitigate criminal activity on GPT-4: “Hackers are smart but pragmatic. The thinking behind user authentication is that anything that can slow down an attack, make it harder to automate and increase the necessary expenditures should be considered.
“However, security cannot be too burdensome for legitimate users – a balancing act for OpenAI to consider from both a user experience and a security viewpoint.
“A solution is to use two-factor authentication based on mobile applications generating time-based, one-time passwords which expire after a short amount of time.
“Alternatively, another promising technology is self-sovereign identity based on blockchain – which, by its nature, cannot easily be tampered with or infiltrated by criminals.
“The tools are out there and at the disposal of OpenAI to deploy in an effort to mitigate the growing threat posed by criminals looking to infiltrate the hottest new feature on the internet right now.”
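The time-based one-time passwords Brandt describes are standardised as TOTP (RFC 6238, built on the HOTP algorithm of RFC 4226). As a rough illustration of the mechanism – not anything OpenAI has said it uses – a minimal sketch with only the Python standard library looks like this; the secret and 30-second step below are illustrative defaults:

```python
# Minimal sketch of time-based one-time passwords (TOTP, RFC 6238),
# the mechanism Brandt describes. Secret and step size are illustrative.
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password for a given counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble of last byte
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret: bytes, at=None, step: int = 30) -> str:
    """Derive the counter from Unix time, so each code expires after `step` seconds."""
    t = int(time.time()) if at is None else at
    return hotp(secret, t // step)


# RFC 6238 test vector: ASCII secret "12345678901234567890" at Unix time 59
print(totp(b"12345678901234567890", at=59))  # -> "287082"
```

Because the code is derived from the current time window, an intercepted password is useless moments later – exactly the “slow down the attack” property Brandt argues for.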
Biernacki says: “Although the early indications surrounding the criminal activity being conducted across AI-powered chatbots are concerning, businesses should not panic.
“What we can see is that both the expertise from the development industry and security measures are available to help ensure criminal activity is kept to a minimum.
“Having the necessary user authentication processes in place will help OpenAI monitor the legitimacy of their users and the activities they are undertaking on the platform.
“Furthermore, businesses that may feel vulnerable to this emerging threat are able to seek the expertise of professional developers to provide security advice, guidance and best practice on identifying weaponised code developed via AI’s latest and most disruptive tool yet.”