AI Literacy

We define AI literacy as a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace. We conducted an extensive review of the literature (see the paper) and distilled a set of key AI literacy competencies and considerations for designing AI literacy learning interventions, which can be used to guide future educational initiatives as well as foster discussion and debate in the AI education field. This page lists and describes the competencies and design considerations that we have outlined.

List of AI literacy competencies and design considerations

Competencies

  1. Recognizing AI: Distinguish between technological artifacts that use and do not use AI.
  2. Understanding Intelligence: Critically analyze and discuss features that make an entity “intelligent”, including the differences between human, animal, and machine intelligence.
  3. Interdisciplinary: Recognize that there are many ways to think about and develop “intelligent” machines. Identify a variety of technologies that use AI, including technologies spanning cognitive systems, robotics, and machine learning (ML).
  4. General vs. narrow: Distinguish between general and narrow AI.
  5. AI’s Strengths and Weaknesses: Identify problem types that AI excels at and problems that are more challenging for AI. Use this information to determine when it is appropriate to use AI and when to leverage human skills.
  6. Imagine Future AI: Imagine possible future applications of AI and consider the effects of such applications on the world.
  7. Representations: Understand what a knowledge representation is and describe some examples of knowledge representations.
  8. Decision-Making: Recognize and describe examples of how computers reason and make decisions.
  9. ML Steps: Understand the steps involved in machine learning and the practices and challenges that each step entails.
  10. Human Role in AI: Recognize that humans play an important role in programming, choosing models, and fine-tuning AI systems.
  11. Data Literacy: Understand basic data literacy concepts such as those outlined in (Prado & Marzal, 2013).
  12. Learning from Data: Recognize that computers often learn from data (including one’s own data).
  13. Critically Interpreting Data: Understand that data cannot be taken at face-value and requires interpretation. Describe how the training examples provided in an initial dataset can affect the results of an algorithm.
  14. Action and Reaction: Understand that some AI systems have the ability to physically act on the world. This action can be directed by higher-level reasoning (e.g. walking along a planned path) or it can be reactive (e.g. jumping backwards to avoid a sensed obstacle).
  15. Sensors: Understand what sensors are, recognize that computers perceive the world using sensors, and identify sensors on a variety of devices. Recognize that different sensors support different types of representation and reasoning about the world.
  16. Ethics: Identify and describe different perspectives on the key ethical issues surrounding AI (e.g. privacy, employment, misinformation, the singularity, ethical decision making, diversity, bias, transparency, accountability).
  17. Programmability: Understand that agents are programmable.
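Several of the competencies above (ML Steps, Learning from Data, and Critically Interpreting Data) can be made concrete with a small program. The sketch below, a toy 1-nearest-neighbor classifier with invented data, is one hypothetical illustration, not part of the framework itself: the prediction is determined entirely by the training examples, so changing those examples changes the result.

```python
# A toy illustration of the ML steps: collect data, choose a
# representation/similarity measure, infer from data, evaluate.
# All features and labels below are invented for illustration.

# Step 1: collect labeled training examples as (features, label) pairs.
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((4.0, 4.2), "dog"),
    ((3.8, 4.0), "dog"),
]

def distance(a, b):
    # Step 2: choose how to compare examples (Euclidean distance here).
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(point):
    # Step 3: the "learning" is implicit -- the answer comes entirely
    # from the training data, so biased or sparse examples would
    # directly bias the output (Critically Interpreting Data).
    nearest = min(training_data, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# Step 4: evaluate on a new, unseen input.
print(predict((1.1, 1.0)))  # prints "cat"
```

A learner can probe competency 13 directly by removing or relabeling a training example and observing how predictions shift.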

Design Considerations

  1. Explainability: Consider including graphical visualizations, simulations, explanations of agent decision-making processes, or interactive demonstrations in order to aid in learners’ understanding of AI.
  2. Embodied Interactions: Consider designing interventions in which individuals can put themselves “in the agent’s shoes” (Druga et al., 2019) as a way of making sense of the agent’s reasoning process. This may involve embodied simulations of algorithms and/or hands-on physical experimentation with AI technology.
  3. Contextualizing Data: Encourage learners to investigate who created the dataset, how the data was collected, and what the limitations of the dataset are. This may involve choosing datasets that are relevant to learners’ lives, are low-dimensional, and are “messy” (i.e. not cleaned or neatly categorizable).
  4. Promote Transparency: Promote transparency in all aspects of AI design (e.g. eliminating black-boxed functionality, sharing creator intentions and funding/data sources). This may involve improving documentation, incorporating explainable AI (Design Consideration 1), contextualizing data (Design Consideration 3), and incorporating design features such as interpretative affordances or the Sim-City Effect.
  5. Unveil Gradually: To prevent cognitive overload, consider giving users the option to inspect and learn about different system components; explaining only a few components at once; or introducing scaffolding that fades as the user learns more about the system’s operations.
  6. Opportunities to Program: Consider providing ways for individuals to program and/or teach AI agents. Keep coding skill prerequisites to a minimum by focusing on visual/auditory elements and/or incorporating strategies like Parsons problems and fill-in-the-blank code.
  7. Milestones: Consider how developmental milestones (e.g. theory of mind development), age, and prior experience with technology affect perceptions of AI—particularly when designing for children.
  8. Critical Thinking: Encourage learners—and especially young learners—to be critical consumers of AI technologies by questioning their intelligence and trustworthiness.
  9. Identity, Values, Backgrounds: Consider how learners’ identities, values, and backgrounds affect their interest in and preconceptions of AI. Learning interventions that incorporate personal identity or cultural values may encourage learner interest and motivation.
  10. Support for Parents: When designing for families, consider providing support to aid parents in scaffolding their children’s AI learning experiences.
  11. Social Interaction: Consider designing AI learning experiences that foster social interaction and collaboration.
  12. Leverage Learners’ Interests: Consider leveraging learners’ interests (e.g. current issues, everyday experiences, or common pastimes like games or music) when designing AI literacy interventions.
  13. Acknowledging Preconceptions: Acknowledge that learners may have politicized/sensationalized preconceptions of AI from popular media and consider how to address, use, and expand on these ideas in learning interventions.
  14. New Perspectives: Consider introducing perspectives in learning interventions that are not as well-represented in popular media (e.g. less-publicized AI subfields, balanced discussion of the dangers/benefits of AI).
  15. Low Barrier to Entry: Consider how to communicate AI concepts to learners without extensive backgrounds in math or CS (e.g. reducing required prerequisite knowledge/skills, relating AI to prior knowledge, addressing learner insecurities about math/CS ability).
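Design Consideration 6 mentions Parsons problems, in which learners reorder scrambled program lines rather than writing code from scratch. As one hypothetical sketch of how such an exercise could be checked (the exercise content and function names here are invented, not from the paper), per-line feedback can be computed by comparing the learner's ordering against a reference solution:

```python
# A minimal sketch of checking a Parsons-style exercise: the learner
# arranges pre-written lines, and feedback marks which positions are
# correct. The exercise lines below are invented for illustration.

CORRECT_ORDER = [
    "examples = load_examples()",
    "model = train(examples)",
    "prediction = model.predict(new_input)",
]

def check_solution(learner_order):
    """Return per-line feedback: True where the learner's line is in place."""
    return [a == b for a, b in zip(learner_order, CORRECT_ORDER)]

feedback = check_solution([
    "model = train(examples)",             # swapped with the line below
    "examples = load_examples()",
    "prediction = model.predict(new_input)",
])
print(feedback)  # prints [False, False, True]
```

Because learners manipulate whole lines instead of syntax, exercises like this keep coding prerequisites low (Design Consideration 15) while still exposing the structure of an AI program.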