Responsible Use

The Responsible Use documentation aims to guide developers in using language models constructively and ethically. Toward this end, we've published guidelines for using our API safely, as well as our processes around harm prevention. We provide model cards to communicate the strengths and weaknesses of our models and to encourage responsible use. We also provide a data statement describing our pre-training datasets.

If you have feedback or questions, please feel free to let us know — we are here to help.

Harm Prevention

We aim to mitigate adverse use of our models with the following:

Responsible AI Research

We’ve established a dedicated safety program that conducts research and development to build safer language models, and we’re investing in both technical (e.g., usage monitoring) and non-technical (e.g., a dedicated team reviewing use cases) measures to mitigate potential harms.
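
To give a sense of what automated usage monitoring can look like in practice, here is a minimal, purely illustrative sketch that flags generations containing blocklisted terms for human review. The `BLOCKLIST` contents, the `flag_generation` helper, and the logging setup are hypothetical assumptions for this example and do not describe our actual systems.

```python
# Hypothetical sketch of automated usage monitoring: flag model outputs
# that contain blocklisted terms so a human reviewer can inspect them.
# The blocklist, function names, and logging setup are illustrative only.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("usage-monitoring")

# Illustrative blocklist; a real system would combine classifiers and
# human review rather than rely on simple keyword matching.
BLOCKLIST = {"example-harmful-term", "another-flagged-phrase"}


def flag_generation(prompt: str, generation: str) -> bool:
    """Return True if the generation should be routed to human review."""
    text = generation.lower()
    if any(term in text for term in BLOCKLIST):
        logger.info("Flagged generation for review (prompt length=%d)", len(prompt))
        return True
    return False


if __name__ == "__main__":
    flagged = flag_generation(
        prompt="Tell me about example-harmful-term",
        generation="Here is some text containing example-harmful-term.",
    )
    print("Needs review:", flagged)
```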

Responsibility Council

We’ve established an external advisory council made up of experts who work with us to ensure that the technology we’re building is deployed safely for everyone.

No Online Learning

To safeguard model integrity and prevent adversarial actors from poisoning the underlying models with harmful content, user input is curated and enriched before being incorporated into training.
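
The sketch below is a minimal illustration of that kind of curation step: it drops empty or highly repetitive submissions and attaches provenance metadata before any training integration. The `CuratedRecord` type, the `curate` function, and the filtering heuristics are hypothetical and are not a description of our actual pipeline.

```python
# Hypothetical sketch of curating user input before it is considered for
# training: drop near-empty or suspicious records and attach provenance
# metadata. The heuristics and field names here are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class CuratedRecord:
    text: str
    source: str
    metadata: dict = field(default_factory=dict)


def curate(raw_inputs: list[str], source: str) -> list[CuratedRecord]:
    """Filter and enrich raw user inputs before any training integration."""
    curated = []
    for text in raw_inputs:
        cleaned = text.strip()
        # Heuristic filters: skip empty or excessively repetitive inputs
        # that could be attempts to skew the training distribution.
        if not cleaned:
            continue
        words = cleaned.split()
        if len(set(words)) < max(1, len(words) // 4):
            continue
        curated.append(
            CuratedRecord(
                text=cleaned,
                source=source,
                metadata={
                    "reviewed": False,
                    "ingested_at": datetime.now(timezone.utc).isoformat(),
                },
            )
        )
    return curated


if __name__ == "__main__":
    samples = [
        "  A normal user message.  ",
        "",
        "spam spam spam spam spam spam spam spam",
    ]
    for record in curate(samples, source="api-feedback"):
        print(record)
```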