Responsibility

The Responsible Use documentation aims to guide developers in using language models constructively and ethically. Toward this end, we've published guidelines for using our API safely, as well as our processes around harm prevention. We provide model cards to communicate the strengths and weaknesses of our models and to encourage responsible use. We also provide a data statement describing our pre-training datasets.

If you have feedback or questions, please feel free to let us know — we are here to help.


Harm Prevention. We aim to mitigate misuse of our models through the following measures:

  • Responsible AI Research: We’ve established a dedicated safety process for research and development on safer language models, and we’re investing in both technical measures (e.g., usage monitoring) and non-technical measures (e.g., a dedicated team that reviews use cases) to mitigate potential harms.
  • Responsibility Council: We’ve established an external advisory council made up of experts who work with us to ensure that the technology we’re building is deployed safely for everyone.
  • No online learning: To safeguard model integrity and prevent adversarial actors from poisoning the underlying models with harmful content, user input is curated and enriched before it is incorporated into training.

Information Security. We protect our users' data through a comprehensive Information Security Program, communicated throughout the organization, that adheres to industry-leading standards, including ISO/IEC 27001 and SOC 2. Our commitment to security extends across all aspects of our operations, ensuring the integrity, confidentiality, and availability of our information assets.