Building Trustworthy AI – Schneier on Security

As AI tools become more prevalent in our daily lives, it is important to question their motives, incentives, and capabilities. A trustworthy AI assistant must be under our control and transparent: able to explain its reasoning to users and cite its sources, with users in control of the data used to train and fine-tune it. Achieving this will require systemic change.

While AI has the potential to be epoch-defining, the problem lies in who owns it. Today's AI systems are primarily created and run by large technology companies for their own benefit and profit. The transition from awe and eager adoption to suspicion to disillusionment is a well-worn path in the technology sector, and we can do better than this.

A trustworthy AI system must be controllable by the user. It should show users how it responds to them, explain its reasoning, and cite its sources, and users should control the data used to train and fine-tune it. As AI-assistive tools become integrated into everything we do, weighing their risks and benefits will become an inherent part of our daily lives.

Realistically, we should all be preparing for a world where AI is not trustworthy. Being a digital citizen of the next quarter of the twenty-first century will require learning the basic ins and outs of LLMs so that you can assess their risks and limitations for a given use case. Trust in an AI system will be hard-won through interaction over time: we will need to test these systems in different contexts, observe their behavior, and build a mental model for how they will respond to our actions.

In conclusion, building trustworthy AI will require systemic change and user control over the system's data and capabilities. Until that change arrives, we should question the motives and incentives behind the AI tools we use, test them in different contexts, observe their behavior, and build mental models of how they will respond to our actions.
