Beyond Netflix: The Critical Need for AI Literacy Among Indigenous Tech Leaders
Safeguarding Indigenous Knowledge in the AI Revolution
prompt: A Māori tech leader surrounded by holographic AI interfaces, it's a face-off for survival, photorealistic style --ar 16:9 --v 6.1
Talofa reader,
I saw a post by a Māori entrepreneur and investor who had recently launched an AI company, and it left me flabbergasted.
In the post, he advises his followers to watch a documentary on Netflix, where the first episode discusses AI. He then remarks on the things he learned from this documentary, on Netflix, in September 2024.
This individual was surprised to learn about the extensive data requirements of large language models like ChatGPT. He only just discovered that these AI systems are trained on massive amounts of internet data, encompassing a wide variety of content. This includes not only openly available information but also potentially copyrighted material, and it's not clear how much of each was used.
The person emphasised that the training data for these models isn't limited to formal publications or books. Training datasets also absorbed personal content such as blog posts and social media interactions. As a result, these AI systems are exposed to a broad spectrum of human expression, which most likely included indigenous knowledge.
He's only just learned this, two weeks after launching his AI company that weaves AI with Indigenous knowledge.
I had to sit there, staring at the screen doing the "beautiful mind" meme, trying to make sense of the world.
How?
How can people be that far along the AI investment and implementation path, especially Māori or Indigenous people, without knowing the very first thing about the technology they're using?
This is the type of thing I'm seeing more of. If Indigenous values actually matter to the people who claim they do, and matter more than money, then this lack of AI literacy amongst Indigenous tech leaders is a troubling trend, and it needs to be addressed.
The Intersection of AI & Indigenous Knowledge
What this person learned from the documentary is something everyone who works with LLMs and studies the literature has known for some time. AI systems, particularly large language models, ingest vast amounts of data from across the globe, which inevitably incorporates traditional wisdom, cultural practices, and unique perspectives from Indigenous communities, i.e., the blogs, newspapers, and online writings from those communities.
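To make the point concrete, here's a minimal, illustrative sketch of how indiscriminate web-text collection works; this is not any lab's actual pipeline, just a toy using Python's standard library. Notice that nothing in this loop asks who wrote the text, whether it's copyrighted, or whether it's culturally sensitive.

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class TextExtractor(HTMLParser):
    """Collects visible text from a page, skipping scripts and styles."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())


def scrape_for_corpus(urls):
    """Naive corpus builder: everything readable goes in, with no
    consent check, attribution, or cultural-protocol filter."""
    corpus = []
    for url in urls:  # any public pages, e.g. community blogs
        with urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="ignore")
        parser = TextExtractor()
        parser.feed(html)
        corpus.append(" ".join(parser.chunks))
    return corpus
```

Real training pipelines are vastly larger and more filtered, but the basic dynamic is the same: if it's publicly readable, it's a candidate for the training set.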
While the main talking points I've seen lately about AI and Indigenous communities have been language preservation and revitalisation, the risk that is always, always on the table is the misappropriation or misrepresentation of this information without proper context or consent.
Have you solved for that, indigenous folks?
So, understanding the far-reaching impact of AI is crucial, as these technologies can influence how indigenous knowledge is perceived, used, and potentially commercialised on a global scale.
And maybe that's my gripe right there - that it's our own people commercialising indigenous knowledge, but not doing the due diligence to be good stewards and custodians of that knowledge.
What's the solution?
AI literacy training, or something.
AI Literacy for Indigenous Tech Leaders
If that person's post is an example of where indigenous tech leaders are with AI literacy, we're in big trouble.
It's troubling enough to see someone relying on mainstream entertainment as a source of education; what's more concerning is what it suggests about the broader lack of comprehensive AI understanding in Indigenous tech circles.
To be clear, AI literacy goes way beyond knowing how to prompt an AI to build an app or generate an image. True AI literacy for indigenous tech leaders encompasses a deep understanding of the full spectrum of AI's impact within the indigenous context. This includes:
Ethical Considerations: Understanding the potential for AI to perpetuate biases or misrepresent indigenous cultures.
Data Sovereignty: Recognising the importance of protecting and controlling indigenous data used in AI systems.
Cultural Preservation: Exploring how AI can be used to preserve and promote indigenous languages and traditions.
Economic Implications: Assessing how AI might affect indigenous economies and job markets.
Legal and Regulatory Landscape: Understanding the evolving legal framework surrounding AI and its implications for indigenous rights.
Technical Foundations: Grasping the basic principles of how AI systems work, including their limitations and potential.
Risk Assessment: Identifying potential risks of AI adoption and strategies for mitigation.
Opportunities for Innovation: Recognising how AI can be leveraged to address unique indigenous challenges and opportunities.
Indigenous tech leaders need to develop a nuanced understanding of AI's pros and cons, risks and opportunities, capabilities and limitations. This comprehensive knowledge is crucial for making informed decisions about AI adoption and development that align with indigenous values and interests.
Without this level of AI literacy, we're risking a lot.
What's at Stake for Indigenous Communities?
The impact of AI development and adoption for indigenous communities presents both massive opportunity and serious potential harms.
Uninformed or misinformed decision-making by tech leaders lacking a comprehensive understanding of AI's capabilities and limitations could lead to misguided strategies or implementations that fail to serve or even harm indigenous interests. This knowledge gap increases the risk of indigenous communities being left behind in the AI revolution or, worse, becoming unwitting fodder in systems that may exploit or misrepresent their cultural heritage.
Particularly concerning is the potential misuse or mishandling of indigenous knowledge within AI systems. Large language models, trained on massive amounts of internet data, may inadvertently incorporate sacred or sensitive indigenous information without proper context, consent, or respect for cultural protocols.
This could lead to the inappropriate dissemination of protected knowledge, cultural appropriation on a grand scale, or the distortion of traditional wisdom when filtered through AI algorithms.
Without informed leadership and careful oversight, AI technology can inadvertently become a tool that perpetuates historical patterns of exploitation and marginalisation, rather than empowering indigenous communities in the digital age.
When Indigenous folks can only name "data sovereignty" as the risk with LLMs, that's a sign they don't understand LLMs, or the wider AI landscape, well enough.
What are the risks and concerns that are already known about LLMs?
AI Ethical Concerns
I was actually in the middle of researching the positive and negative effects of relying on AI for research and learning when I read the LinkedIn post that inspired this one.
The paper I was reading, "The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review", had this brilliantly summarised list of ethical concerns with respect to AI.
The key points are:
AI hallucinations: Generation of inaccurate or misleading information
Algorithmic biases: Potential to perpetuate or amplify existing societal prejudices
Plagiarism: Reproduction of content without proper attribution
Privacy concerns: Issues related to accessing and processing vast amounts of personal data
Transparency issues: Lack of clarity in AI decision-making processes
Ethical implications: Need for responsible development and deployment of AI technologies
These points represent widely known AI ethical risks that any tech leader, including those in indigenous communities, should be familiar with to make informed decisions about AI adoption and development.
Conclusion
The intersection of AI and indigenous knowledge presents both unprecedented opportunities and serious challenges.
The apparent lack of comprehensive AI literacy among some indigenous tech leaders is concerning and needs urgent attention. This knowledge gap, in my humble opinion, exposes communities and indigenous knowledge to massive risk of exploitation and misrepresentation.
Moving forward, it is critical that Indigenous tech leaders prioritise developing a nuanced understanding of AI technologies, their implications, and their ethical considerations. This goes beyond surface-level knowledge of AI applications to a deep grasp of AI's potential impacts on culture, data sovereignty, and community well-being.
Ultimately, the goal should be to empower indigenous tech leaders to become not hacks or basic implementers of AI, but informed, critical, and innovative players in the global AI landscape.
And you can't do that by learning AI basics from a Netflix documentary.
Thanks for reading, see you in the next one.
Ron.