Security & privacy

    Gradeo applies AI to education with an approach centered on support, transparency, privacy, and caution: it helps teachers prepare useful proposals without replacing human review or automating the final academic decision.

    Responsible AI and Gradeo's role

    At Gradeo, we believe that applying artificial intelligence in educational contexts requires more than good results: it requires clarity about the role of AI, respect for privacy, and a product design that keeps people in control. That is why Gradeo is designed as a support tool for teachers. Its role is to help prepare grading, feedback, and tutoring proposals with greater consistency and less operational burden, without replacing the teacher's professional judgment.

    Our approach to use

    We design Gradeo around one central idea: AI assists, the teacher decides, and the final decision is not formalized automatically inside the application. This means Gradeo is not intended as an autonomous decision-making tool, but as a support system that helps prepare a useful, reviewable, and contextualized proposal.

    What Gradeo can do

    Depending on the workflow and chosen configuration, Gradeo can analyze academic documents, apply rubrics more consistently, propose criterion-based scores, draft structured feedback, generate supporting results or reports, and support more consistent grading and tutoring processes. All of this relies on context provided by the teacher, such as the rubric, instructions, reference materials, preferred style, and process configuration.

    What Gradeo does not do

    Gradeo is not intended to replace the teacher's judgment, issue a valid final academic decision on its own, or become the sole basis for a final grade. In the product's current state, Gradeo ends with an AI-assisted proposal. Final review and definitive use of the result must take place outside the application, according to the teacher's judgment and the applicable organizational framework.

    Principles for responsible use

    Responsible AI use in education requires recognizing that not all decisions have the same impact. The more academically significant an activity is, the more important it becomes to review outputs with human judgment, to avoid inappropriate automation, to understand that AI proposals may contain errors or inconsistencies, and to adapt usage to the school's or organization's policy when one exists. That is why our approach is based on several principles: transparency about AI involvement, clarity about the system's limits, caution in sensitive contexts, and reinforcement of the teacher's role as the person responsible for the final decision.

    Data that may appear during use

    At Gradeo, privacy is not treated as an add-on, but as a core part of product design and responsible use. When Gradeo is used, the data involved may include student documents, names or other identifiers, academic observations, feedback, and grading or tutoring results. For that reason, the recommended practice is always to share only the necessary data, avoid unnecessary personal information, limit the exposure of results, and work in line with the privacy policy of the school or organization whenever one exists.

    Minimization, anonymization, and caution

    Our privacy approach starts from a simple principle: less data is better, as long as the teaching purpose can still be achieved. In practice, this means providing only the context that is genuinely useful, avoiding irrelevant documents or personal data, using anonymization when the workflow allows it, and exercising extra caution when minors or sensitive information are involved.
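    To make the anonymization idea above concrete, here is a minimal, hypothetical sketch (not part of Gradeo itself) of how a teacher or institution might pseudonymize student names before sharing material. It assumes the salt and the resulting name-to-pseudonym mapping are kept outside the shared document; the function name and salt value are illustrative only.

```python
import hashlib

def pseudonymize(text: str, names: list[str], salt: str = "keep-this-secret-locally") -> str:
    """Replace each listed name with a stable pseudonym derived from a
    salted hash, so the same student maps to the same label throughout
    a document without exposing the real identity."""
    for name in names:
        digest = hashlib.sha256((salt + name).encode("utf-8")).hexdigest()[:8]
        text = text.replace(name, f"Student-{digest}")
    return text

doc = "Ana Perez submitted the essay late. Ana Perez cited three sources."
print(pseudonymize(doc, ["Ana Perez"]))
```

    Note that simple string replacement is a sketch, not robust anonymization: it misses nicknames, typos, and indirect identifiers, which is one reason human review and institutional policy remain essential.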

    Who can access information and results

    Access to results should be limited to the teacher or teaching team responsible for the process, and any later communication should respect the applicable academic and organizational context. We recommend avoiding the broad circulation of AI-generated proposals or results without prior review, especially when they could be misinterpreted as final decisions or have meaningful consequences for students.

    Institutional use in B2B settings

    When Gradeo is used through a university, school, academy, or another organization, the framework for use is shaped by the client's internal policy, its privacy rules, its criteria for retention and communication of results, and its teaching and organizational processes. In that context, Gradeo should be integrated within the governance and compliance framework defined by the institution.

    Individual use in B2C settings

    When Gradeo is used individually, the user has more direct responsibility for what information is entered, whether the use is appropriate for their professional activity, and how the AI-generated proposal is reviewed afterward. In these cases, extra caution is especially important when minors are involved, when results have significant consequences, or when the information being handled is particularly sensitive.

    Minors and higher-sensitivity contexts

    Some situations require an enhanced level of care, such as use involving minors, especially sensitive personal information, activities with high academic impact, or results that may be shared widely. In these contexts, the recommendation is clear: minimize data, review more carefully, avoid allowing an AI proposal to circulate as if it were a final decision, and always apply the relevant legal and organizational framework.

    Transparency and human review

    Trust in an AI-based educational tool depends on users understanding when AI is involved, what kind of output they are receiving, and which parts of the process still depend on human review outside the product. That is why Gradeo is designed to communicate clearly that outputs generated by the system should be interpreted as assisted proposals, not autonomous final decisions.

    Practical relationship with GDPR

    From a GDPR perspective, our principle is straightforward: reduce unnecessary data as much as possible, support use that is proportionate to the teaching purpose, and help ensure the product can be used within responsible organizational frameworks. This does not remove the need for each institution, professional, or client to assess its own legal fit, but it does define a clear product and usage direction centered on privacy, proportionality, and minimization.

    Practical relationship with the EU AI Act

    The EU AI Act has raised the level of scrutiny applied to AI systems used in sensitive contexts, including education. Gradeo's practical response to that regulatory environment is based on three ideas: AI should be used as support rather than as an automatic substitute for the teacher, transparency about the system's role is essential, and higher-impact or more sensitive scenarios require more cautious and controlled use. Rather than presenting AI as blind automation, Gradeo is positioned as a tool designed to help teachers work with greater consistency, better traceability, and a stronger basis for later review.

    Recommended best practices

    Regardless of setting, we recommend configuring the rubric, context, and instructions carefully, entering only the information that is necessary, interpreting the output as an AI-assisted proposal, reviewing important or sensitive cases more carefully, and resolving the final decision outside Gradeo according to the teacher's judgment and the applicable framework.

    What we recommend avoiding

    For responsible use, we recommend not treating the output as the sole basis for finalizing a grade, not uploading more personal data than necessary, not sharing results without reviewing their context, and not using the tool outside your organization's policy if you work within an institution.

    Providers and data processing

    We may rely on technology and AI providers to deliver the service, always within the contractual and operational framework that applies in each case. Even so, our product approach remains the same: limit unnecessary data, support uses that are proportionate to the teaching purpose, and ensure that use of the tool fits within responsible processes for review, governance, and compliance.