Demystifying ChatGPT for Government Leaders

Generative artificial intelligence (AI) technologies, like OpenAI's ChatGPT, have significant potential to disrupt and influence workplace culture. But what are generative AI and ChatGPT? How are they applied? What are the risks? As stories of ChatGPT's successes (such as passing a U.S. medical licensing exam) emerge daily, government leaders need to quickly address these questions.

What is ChatGPT?

ChatGPT is an application (or chatbot) that leverages natural language processing (NLP) to produce content (written language, code, etc.) in response to user prompts. ChatGPT's popularity on social media channels has quickly made it well-known to the public. Built on OpenAI's generative pre-trained transformer (GPT) language models, ChatGPT interprets and replicates human language in text or speech. The model is trained on a large data set of text, which it draws on to generate responses to questions or prompts that read much like a human's (think Turing Test).

Cartoon designed by ChatGPT; image produced by Stable Diffusion.

ChatGPT is one of several AI-powered language models; others include OpenAI's GPT-3, Google's Bidirectional Encoder Representations from Transformers (BERT), and Facebook's RoBERTa. These models use deep learning techniques to process and generate text based on user input and prior training on large data sets.

If you prompt ChatGPT to “Explain AI to a novice in one sentence,” it responds with “AI, or artificial intelligence, is the simulation of human intelligence in machines that are programmed to think and learn like humans.” However, that answer may change over time as each use helps ChatGPT produce even better results.
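
For readers curious what such a prompt looks like programmatically, here is a minimal sketch. It assumes access to OpenAI's Python client (the openai package, v1.x) and an API key in the OPENAI_API_KEY environment variable; the model name shown is an assumption, not an endorsement of a particular version.

```python
# Minimal sketch: sending the same prompt to a GPT model through OpenAI's API.
# Assumes the `openai` package (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; substitute whatever model your organization has approved
    messages=[{"role": "user", "content": "Explain AI to a novice in one sentence."}],
)

print(response.choices[0].message.content)
```

Running the same prompt more than once may return different wording, which is the variability described above.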

Since ChatGPT is a machine learning model, it does make mistakes and cannot understand context and emotions as humans do. Therefore, these technologies work best when used as human-in-the-loop capabilities, with the model functioning as an accelerator for generating content for review and refinement by a human with domain expertise and practical reasoning skills. We call this new dynamic an AI-enabled workforce.
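
To make that human-in-the-loop dynamic concrete, the sketch below uses hypothetical helper functions (draft_with_model, human_review, produce_content) that are illustrations rather than any real LMI or OpenAI tooling: the model accelerates drafting, while a reviewer with domain expertise decides whether anything is kept.

```python
# Illustrative human-in-the-loop workflow: the model drafts, a human with
# domain expertise reviews, and nothing is used until a person approves it.
# All function names here are hypothetical, for illustration only.

def draft_with_model(prompt: str) -> str:
    """Stand-in for a call to a generative model (e.g., the API sketch above)."""
    return f"[model-generated draft for: {prompt}]"

def human_review(draft: str) -> tuple[bool, str]:
    """A domain expert approves the draft or supplies feedback for another pass."""
    decision = input(f"Draft for review:\n{draft}\nApprove? (y/n) > ")
    if decision.strip().lower() == "y":
        return True, ""
    return False, input("What should change? > ")

def produce_content(prompt: str, max_rounds: int = 3) -> str | None:
    """Iterate drafts with the model, but let a human make the final call."""
    for _ in range(max_rounds):
        draft = draft_with_model(prompt)
        approved, feedback = human_review(draft)
        if approved:
            return draft  # only human-approved content leaves the loop
        prompt = f"{prompt}\nReviewer feedback: {feedback}"  # refine the next draft
    return None  # escalate to a person rather than publish an unreviewed draft
```

In practice, draft_with_model would wrap an actual model call, and the review step might live in an existing workflow tool rather than a console prompt.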

“Google has invested almost $400 million in artificial intelligence startup Anthropic, which is testing a rival to OpenAI’s ChatGPT, according to a person familiar with the deal.” (Source: Davey Alba and Dina Bass, “Google Invests Almost $400 Million in ChatGPT Rival Anthropic,” Bloomberg, February 3, 2023.)

What does this rapid evolution of AI mean for government missions, and what should leaders consider when exploring the use of generative AI technologies in their organizations? Government leaders will need to rethink their own skills, those of their workforce, and the impact on the work environment.

How is ChatGPT applied today?

While these NLP applications continue to evolve, they already improve processes in the public and private sectors:

  • Robotic process automation: automate repetitive tasks, such as data entry and analysis, freeing government employees to focus on more complex and important tasks.
  • Customer service: implement chatbots to answer common questions from citizens, reducing the workload on government call centers and offering faster and more accurate information than that available from human operators.
  • Predictive analytics: predict future trends, such as in crime statistics, enabling government organizations to address issues proactively.
  • Decision-making: offer government officials valuable insights and recommendations, better informing decisions.
  • Fraud detection: detect and prevent fraudulent activity.
  • Knowledge management: make information easier to access and process, leading to more efficient processes.
  • Policy making: analyze the potential outcomes of policies.

As AI evolves, even more sophisticated and powerful systems will emerge. In the future, ChatGPT will improve its natural language understanding to better respond to complex questions and requests. This upgrade will lead to more advanced dialogue systems, capable of maintaining coherent and natural-sounding conversations.

The integration of ChatGPT with other technologies, such as virtual and augmented reality, will become more widespread, further enhancing its value for agencies such as the U.S. Department of Health and Human Services and the Department of Defense. The use of ChatGPT and other AI technologies will expand across a range of federal domains, including healthcare, budgeting and auditing, and research and development, as organizations leverage these technologies to improve their operations, enhance the customer experience, and embrace innovation. In addition, ChatGPT can revolutionize research by assisting in uncovering new areas of study, generating hypotheses, and even helping design experiments.

Yet, to truly maximize these new capabilities, humans remain critical to wielding the various applications. Knowing which application to use for a given problem, and how to iterate on its outputs to reach a more refined and useful result, will be a necessary part of this innovation for the foreseeable future.

What are the risks and challenges?

Despite the potential of ChatGPT and other AI-enabled workforce applications for decision-makers, risks and challenges exist and will continue to emerge. Beyond the business and societal implications, leaders must consider the following:

  • Lack of data diversity: ChatGPT is trained on a large data set of text. If the data set is not diverse, the model may not understand or respond correctly to different perspectives and cultures, leading to biases and errors. Its outputs are based on historical inputs regardless of appropriateness or accuracy.
  • Data security: As text identification and synthesis tools, ChatGPT and similar solutions pose minimal direct security risk. However, any data a user enters may become part of the training data set, placing it effectively in the public domain.
  • Lack of interpretability: ChatGPT's decision-making process can be difficult to understand, justify, or explain.
  • Safety concerns: ChatGPT could have unintended consequences, such as spreading misinformation or automating decision-making with negative effects on people.
  • Lack of regulation: Regulations around this technology are lacking, causing confusion and uncertainty about how it should be used and by whom.
  • Difficulty in scaling: Scaling computationally intensive language models to meet the demands of large-scale applications is challenging.
  • Limited understanding: Some users have an incomplete understanding of the capabilities and limitations of the model, resulting in unrealistic expectations and disappointment with performance.

Individuals and teams should not incorporate ChatGPT or AI-enabled workforce capabilities into their operations without understanding how they work, what they do well, and where limitations or risks remain. Government agencies must use the technology responsibly and protect customer data. These challenges are common issues with AI in general.

Like many innovations, ChatGPT straddles the line between scary and exciting. Because of ChatGPT's ease of use and potential to solve organizational problems, it is accelerating widespread adoption of AI. The contributions of ChatGPT, and other AI technologies like it, to work culture will be significant. Getting ahead of their widespread use will help leaders keep pace with mission needs.

Having read this thought piece, can you tell which parts ChatGPT produced?

Keith Rodgers
Sr. Vice President, Digital & Analytic Solutions

Keith brings nearly two decades of experience in leveraging innovative techniques to assess organizational performance and challenges.

Brant Horio
Sr. Fellow, Applied Research & Partnerships

Brant Horio leads our strategic research group, integrating resources, technologies, and partnerships with applied research to advance LMI's innovation mission.