Article Excerpt: At many schools, initial efforts set out to give staff a broad understanding of how AI works and where it falls short. The training covers the workings of large language models (LLMs), the foundational technology that enables many AI tools to understand, analyze, and generate human language; the transparency (or lack thereof) about where an AI tool gets its information and how it generates answers; how AI tools produce errors; the risks of uploading data and private information into AI tools; and the bias that infiltrates AI tools through the materials from which they learn.
“We have to teach people principles and critical thinking about AI,” says Thomas Thesen, PhD, director of the digital health and AI curriculum at the Geisel School of Medicine at Dartmouth in New Hampshire. He says this fundamental understanding — “how to apply it [AI], how to check it, how to see where it’s appropriate” for various uses — will serve users well as AI capabilities evolve to handle specific tasks.
Full Article: https://tinyurl.com/yvvmy5r9
Article Source: AAMC