AI Act Compliance - Oro

Would you like to learn more about the AI Act compliance of Oro, our AI assistant?


At Tomorro, we are dedicated to ensuring that Oro, our AI Assistant, meets the requirements of the EU AI Act, delivering a solution that is transparent, safe, and ethical. Oro is categorized as a low-risk AI system, and we apply the necessary measures to ensure compliance.


Risk category

  • Control: Classification of Oro under the EU AI Act as a low-risk AI system.

  • What it is exactly: Oro is used exclusively for business-related tasks such as contract summarization, OCR, clause analysis, and risk identification, which are considered low risk under the EU AI Act guidelines. It does not engage in high-risk applications such as biometric surveillance or critical decision-making in healthcare or law enforcement.

  • How we are compliant: Oro has been assessed internally against the EU AI Act’s risk criteria and is categorized as low-risk due to its operational scope.


Transparency commitments

  • Control: Clear communication of AI-generated outputs and functionalities.

  • What it is exactly: Oro ensures that users can always identify when they are interacting with an AI system, in line with Article 50 of the EU AI Act.

  • How we are compliant:

    • AI interactions are labeled under the name "AI Assistant" wherever displayed.

    • Text generated by the AI Assistant is identifiable, fully readable, and copyable.


Assurance of safety and ethical use

  • Control: Ensuring safe, transparent, and ethical use of AI functionalities, including acknowledging and addressing potential errors.

  • What it is exactly: Oro is designed to provide clear and reliable results for tasks. However, as with any AI system, it may occasionally produce errors or incomplete interpretations.

  • How we are compliant:

    • Transparency: Users are explicitly informed that AI outputs are suggestions and may require review for accuracy.

    • Safety: By acknowledging potential errors, Oro encourages users to validate outputs before making decisions.

    • Ethics: AI outputs are designed to assist rather than replace user judgment, ensuring that the user retains full control over decision-making.

    • Error Handling: Feedback mechanisms are in place to allow users to flag errors, which helps improve the system over time.


Modification and handover obligations

  • Control: Transparency in the event of substantial modifications or handover of AI systems.

  • What it is exactly: Compliance with Articles 3 and 25 of the EU AI Act, which require clear documentation for any substantial modification to the AI system or its handover to a third party.

  • How we are compliant:

    • We maintain and provide technical documentation (e.g., system design, risk assessments) to ensure that third parties making modifications can meet compliance obligations.

By structuring our measures around these key controls, Tomorro ensures that Oro is a reliable, transparent, and compliant AI solution aligned with the EU AI Act.
