Recent breakthroughs in generative AI have seen companies using this technology to improve processes and save time. ChatGPT stands out as one of the most popular tools. However, with the risks of misinformation, hallucinations, and copyright issues, how are the businesses using this AI technology addressing ChatGPT's potential challenges?
- ChatGPT is considered an effective tool by most users
- What is ChatGPT used for in UK companies?
- 60% of users meticulously review the outputs from ChatGPT
- 76% of ChatGPT users use the tool daily
- Cybersecurity, over-reliance, and misinformation are key concerns surrounding ChatGPT
- Businesses should set guidelines to combine ChatGPT with human expertise
Over half a year has passed since ChatGPT was launched in November 2022. The artificial intelligence (AI) tool already had 100 million users two months after its release. It has been difficult to avoid the hype and constant stream of information around this generative AI technology's successes, challenges, and opportunities.
For businesses using ChatGPT, questions may arise on whether ChatGPT can replace humans and, if so, for what tasks. Chatbots have been helping businesses automate customer queries for years, and other conversational AI platforms have provided automated solutions for building customer engagement and human-computer interactions. ChatGPT has raised the bar regarding natural language processing and machine-based content generation.
Following up on our research piece on the use of generative AI tools in UK businesses, we asked employees which tools they use. ChatGPT is the most used tool, with 62% of respondents saying they use this AI technology.
We surveyed ChatGPT users on how they use this technology, the benefits they get from it, and their concerns regarding the tool. This article will discuss how UK businesses can effectively use ChatGPT while offering actionable advice on verifying outputs and addressing transparency issues. Readers will gain insights on how to integrate ChatGPT into workflows while being aware of its limitations and ensuring compliance with guidelines.
The full methodology can be found at the end of the article.
What is ChatGPT?
ChatGPT is a generative AI (GAI) tool. These tools use AI to create new and original content such as images, videos, code, or text, based on prompts. ChatGPT, in particular, can create original written content, including responses to questions and commands.
ChatGPT also falls under the scope of Large Language Models (LLMs), which can generate human-like text in almost any language, including programming languages.
Developed by OpenAI, ChatGPT takes the GPT in its name from Generative Pre-Trained Transformer, the underlying model. The model generates responses based on information obtained through machine learning techniques, with training data drawn from the internet, including articles, blogs, books, news, and other online sources.
By learning from such large amounts of data, ChatGPT deploys deep learning and natural language understanding to analyse texts, recognise patterns, and generate answers that simulate human-like conversations and are informed by the wealth of knowledge it has sourced.
This tool can be used in many different ways—it can explain or summarise concepts, help brainstorm ideas, translate texts, write stories, draft emails or letters, or even help write appeals for activities such as challenging a parking fine.
However, businesses should be aware that it is not exempt from controversies or challenges. AI models like ChatGPT can provide biased or inaccurate data and sometimes make up sources or references (known as "hallucinations"), and some countries have banned the tool. This emphasises the need for users to quality check the outputs ChatGPT delivers to ensure they are authentic, accurate, and compliant with legal requirements, industry standards, and codes of conduct.
ChatGPT is considered an effective tool by most users
In order to become an invaluable tool across industries, ChatGPT must provide outputs that cater to the demands of its users. In this regard, and despite ChatGPT being a new technology, our survey respondents are overwhelmingly satisfied with the tool's effectiveness.
When asked to rate the results of ChatGPT based on their experience, 55% of respondents say the results are highly effective, and another 42% say they are effective.
What is ChatGPT used for in UK companies?
Given the wide array of uses for ChatGPT, we wanted to discover how employees are most commonly implementing the tool.
According to our respondents, ChatGPT is most commonly used for text editing (37%), data analysis (35%), and writing copy (35%).
GAI can help employees carry out relatively mundane tasks, such as writing emails, automating customer requests, editing texts, or brainstorming keywords around a topic, saving them time.
However, some tasks that ChatGPT can perform should be approached with caution. Given that ChatGPT can write text in a human-like manner, businesses may be tempted to reduce costs and save time by letting AI write content that requires more depth, such as articles, press releases, or reports. By doing so, they may be running some risks.
60% of users meticulously review the outputs from ChatGPT
ChatGPT can draw on information from sources without verifying authenticity or copyright rules. Also, the tool's knowledge is largely limited to information from before late 2021. Combined with the technology's tendency to invent facts, this creates a risk of plagiarism, inconsistencies, or false information that can be detrimental if businesses do not catch them.
We asked our survey respondents how closely they review the outputs delivered by ChatGPT.
According to our survey, all ChatGPT users check their outputs. However, the level of detail they apply to this varies. While 60% are meticulous in their review of ChatGPT’s outputs, the remaining respondents are less consistent.
Our survey results further show the importance of reviewing ChatGPT’s outputs when we analyse how often users have to correct or edit the information the tool provides, on average:
- 34% say they need to correct less than 20% of responses
- 42% correct 20-50% of responses
- 24% have to correct over half of all ChatGPT outputs
How can businesses verify the accuracy of ChatGPT outputs?
Here are four ways managers can review and authenticate the outputs from ChatGPT:
- Fact-check: Asking ChatGPT to name its information sources is unreliable, as it may invent references. Instead, teams should cross-reference statistics or statements with trusted websites and references to ensure accuracy.
- Check for plagiarism: Since ChatGPT generates text based on training data, some content may be similar to existing sources. Businesses can use plagiarism checkers to compare the generated text against a vast database of published material to avoid plagiarism.
- Review with humans: Dedicated staff can verify the accuracy and quality of generated content and provide internal feedback and insights on the outputs. This feedback can be delivered through surveys or reporting tools, and the results can help managers establish internal guidelines for those who use the tool.
- Refine your prompts: Employees should iterate on their prompts, reviewing the outputs each time to fine-tune the content they receive. Prompts can be optimised by comparing different outputs and identifying areas for improvement, as shown in the sketch after this list.
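To illustrate that last point, here is a minimal sketch of how a team might request several candidate answers to the same prompt for side-by-side review. It assumes the official openai Python client (v1.x), an API key in the OPENAI_API_KEY environment variable, and a placeholder model name and prompt; none of these details come from the survey.

```python
# A minimal sketch, assuming the official `openai` Python client (v1.x) and an
# API key in the OPENAI_API_KEY environment variable. The model name and
# prompt are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()

def draft_candidates(prompt: str, n: int = 3) -> list[str]:
    """Request several candidate answers to the same prompt for side-by-side review."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # assumed model; swap for whichever your plan includes
        messages=[{"role": "user", "content": prompt}],
        n=n,                     # ask for n alternative completions
        temperature=0.7,
    )
    return [choice.message.content for choice in response.choices]

if __name__ == "__main__":
    candidates = draft_candidates(
        "Summarise the key risks of using generative AI at work in 100 words."
    )
    for i, text in enumerate(candidates, start=1):
        print(f"--- Candidate {i} ---\n{text}\n")
```

Comparing the candidates against each other, and against trusted sources, makes it easier to spot hallucinations or weak wording before any text is used.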
76% of ChatGPT users use the tool daily
As employees discover the varied uses for ChatGPT, they are already integrating these tools into their everyday work routine. We wanted to know how often employees turn to ChatGPT to help them.
Over three-quarters of respondents use ChatGPT daily: 44% use the tool between three and ten times a day, and two in ten employees use it more than ten times a day.
ChatGPT can help improve work processes for users
You can't judge a tool's value just by how often it is used. While our results show that employees are incorporating ChatGPT as one of their daily office tools, we wanted to know why. What are the main benefits respondents are getting from using ChatGPT at work?
The top benefits that ChatGPT users identified are:
- Improved workflows (47%): Businesses can leverage ChatGPT to automate daily tasks such as summarising daily reports (see the sketch after this list), drafting emails, and streamlining processes to make workflows smoother and more efficient.
- Creative outputs (39%): ChatGPT’s writing capabilities go beyond drafting emails. The tool can create ad headlines, call-to-actions, story outlines, and FAQs, or brainstorm topic clusters, among other features that can help many departments, such as marketing teams.
- Better data analysis and insights (38%): Businesses can receive quick and accurate summaries from data inputs that can help them improve their services and act on these insights.
- Reduced costs (34%): ChatGPT, even in its free version, can sometimes replace other tools in a business's tech stack by acting as a versatile virtual assistant, ultimately lowering costs on extra tools or external agencies. Also, automating tasks can help companies prioritise quality over quantity for tasks like proofreading or content optimisation. However, ChatGPT cannot provide all the features of specialised software, so businesses should take care to deploy it only for the basic needs it genuinely covers rather than as a wholesale replacement for dedicated tools.
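As a concrete example of the first benefit, the sketch below shows how a daily report could be summarised programmatically. It again assumes the official openai Python client (v1.x); the file path, model name, and prompt wording are illustrative placeholders rather than survey findings.

```python
# A minimal sketch, assuming the official `openai` Python client (v1.x) and an
# API key in OPENAI_API_KEY. The report path, model, and prompt are hypothetical.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

def summarise_report(path: str, max_words: int = 150) -> str:
    """Summarise a plain-text daily report into a short briefing for the team."""
    report_text = Path(path).read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=[
            {"role": "system", "content": "You summarise internal reports accurately and concisely."},
            {"role": "user", "content": f"Summarise this report in at most {max_words} words:\n\n{report_text}"},
        ],
        temperature=0.2,  # lower temperature for more consistent summaries
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarise_report("reports/daily_report.txt"))  # hypothetical file
```

As with any ChatGPT output, a human should still check the summary against the original report before it is circulated.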
Cybersecurity, over-reliance, and misinformation are key concerns surrounding ChatGPT
We have already mentioned some of the disadvantages and flaws of ChatGPT. But what are the main worries users may have when using these tools? Only 7% of respondents express no concern at all about ChatGPT.
Cybersecurity is a concern for ChatGPT users
Many respondents have varied concerns regarding security, such as hacking. This underscores the importance of deploying security measures. With UK companies increasingly falling victim to ransomware and cyberattacks, businesses can implement cybersecurity measures such as being vigilant against phishing and social engineering, avoiding sharing private information, and thoroughly reviewing OpenAI's data privacy policies. They should also protect their OpenAI accounts with strong passwords and enable two-factor authentication.
How can companies prevent the spread of misinformation when using ChatGPT?
Respondents have concerns regarding the spread of misinformation due to overtrusting ChatGPT's outputs. Because ChatGPT is ultimately a black box, with internal workings that are invisible to the user, the verification, transparency, and traceability of its answers can be difficult to establish.
To counter misinformation and transparency issues, businesses can implement fact-checking and verification processes and provide transparent disclaimers about ChatGPT's limitations should they choose to use the tool without human assistance. In this case, businesses should regularly review the responses, supply the model with up-to-date information (for example, through fine-tuning where available) to improve its accuracy, avoid prompts that may lead to biased or inaccurate outputs, and report any concerning results to OpenAI.
Furthermore, companies should encourage user feedback and create channels to report inaccuracies and express concerns. By using channels that can include online surveys, social media monitoring or customer satisfaction tools, businesses can improve the performance of ChatGPT.
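One lightweight way to set up such a channel is a shared log of reviewed outputs. The sketch below is a hypothetical illustration using only Python's standard library; the field names and file location are assumptions, not an established process from the survey.

```python
# A hypothetical feedback log for ChatGPT-assisted content, using only the
# Python standard library. Field names and file location are illustrative.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("chatgpt_feedback_log.csv")  # assumed shared location
FIELDS = ["timestamp", "prompt", "output_excerpt", "reviewer", "accurate", "notes"]

def log_feedback(prompt: str, output: str, reviewer: str, accurate: bool, notes: str = "") -> None:
    """Append one reviewed ChatGPT output so managers can spot recurring issues."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "output_excerpt": output[:200],  # keep the log compact
            "reviewer": reviewer,
            "accurate": accurate,
            "notes": notes,
        })

# Example usage (hypothetical):
# log_feedback("Draft a press release about...", generated_text, "j.smith",
#              accurate=False, notes="Invented a statistic; corrected before publishing.")
```

Reviewing such a log periodically gives managers evidence of where outputs most often need correction and where internal guidelines should focus.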
How can businesses address their overreliance on ChatGPT?
Nearly a third of respondents are also worried about the possible over-reliance on ChatGPT to perform tasks. This further highlights the need to find a balance between AI automation and human expertise.
To mitigate overreliance, businesses can assign specific tasks to ChatGPT, leveraging the strengths of both the tool and human agents. By defining employee roles and carrying out effective training to teach employees to work alongside ChatGPT, businesses can retain human talent where it matters most.
While ChatGPT can offer many benefits, it also has limitations. These include a lack of emotional intelligence and a dependence on data that can lead to potential misuse or misinterpretation. Businesses should be aware of these limitations, train their employees, and establish guidelines to address these challenges.
Businesses should set guidelines to combine ChatGPT with human expertise
Our survey shows that while employees using generative AI tools have swiftly adopted ChatGPT and are reaping its benefits, there are still concerns surrounding the tool. On one hand, survey respondents are confident that ChatGPT can already create content that rivals human work: 41% of respondents say that ChatGPT could definitely rival human creations, while 49% consider that it can do so to some extent.
On the other hand, concerns about transparency, security, and the risk of over-reliance on the tool are coupled with the need for regulations and guidelines to ensure compliance when using these tools. UK universities have already agreed on guiding principles for GAI tools, including ChatGPT. The UK government has set guidelines for its civil servants, and the National Cyber Security Centre (NCSC) has signalled its awareness of the risks of LLMs and generative AI models.
Businesses should take heed of these steps and implement their own guidelines to safeguard the transparency and integrity of their content, while training their staff to use tools like ChatGPT efficiently and leverage their benefits to improve work processes.
Methodology
To collect the data for this report, Capterra conducted an online survey of 496 employees in the UK in 2023. The participants were selected based on the following criteria:
- UK resident
- Between the age of 18 and 65
- Employed full or part-time
- Uses a computer/laptop to complete their daily tasks at work
- Uses generative AI for their work
This article focuses on the 306 study participants who reported using ChatGPT.