AI Ethics Manifesto

Queryflo by Data-Techcon · Published January 1, 2026

Our public commitment to responsible AI in education.

"AI in education should make people more capable, not more dependent. Every AI feature we build asks: does this help the learner understand more deeply, or does it do the thinking for them?"

— Data-Techcon Engineering Team

AI Augments, Not Replaces
We design AI outputs to explain reasoning, not just provide answers. The learner must engage.

Full Transparency
AI-generated content is always labeled. Limitations are disclosed. No black-box outputs.

Fairness & Neutrality
Challenge scenarios avoid demographic bias. We regularly review content for representation.

User Wellbeing
Gamification (streaks, XP) is designed to motivate, not to exploit. We don't use dark patterns.

Data Sovereignty
Your data belongs to you. We don't sell it, trade it, or use it to train AI models.

Safety by Design
SQL runs in a sandboxed, read-only environment. AI prompts are constrained to educational scope.

1. Purpose-Bounded AI

Every AI feature in Queryflo is scoped to a specific educational purpose. The Code Reviewer checks SQL governance. The Exec Summary teaches data storytelling. The AI Judge teaches query evaluation. None of these features are designed or permitted to function outside their educational scope. We explicitly constrain AI prompts to prevent off-topic or harmful outputs.
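To make the "sandboxed, read-only" commitment concrete, here is a minimal sketch of what such a guard could look like. This is an illustrative example, not Queryflo's actual implementation; the function name `run_readonly_query` and the keyword blocklist are assumptions for the sketch. It layers two defenses: a statement check that rejects anything other than a plain SELECT, and SQLite's read-only URI mode so that writes fail at the database layer even if the first check is bypassed.

```python
import sqlite3

# Hypothetical blocklist of write/DDL keywords; a real system would use a
# proper SQL parser rather than keyword matching.
FORBIDDEN = ("insert", "update", "delete", "drop", "alter", "create", "attach", "pragma")

def run_readonly_query(db_path: str, sql: str) -> list:
    """Execute a single SELECT against a database opened read-only.

    Defense in depth: reject non-SELECT statements up front, then open
    the file with SQLite's read-only flag so the engine itself refuses
    any write that slips past the check.
    """
    stripped = sql.strip().rstrip(";").lower()
    if not stripped.startswith("select") or any(w in stripped.split() for w in FORBIDDEN):
        raise ValueError("only read-only SELECT statements are allowed")
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()
```

In the same spirit, constraining AI prompts to an educational scope typically means wrapping every model call in a fixed system prompt and rejecting requests outside the feature's purpose, rather than passing user input through unmodified.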

2. Honest About Limitations

AI-generated SQL and analysis can be wrong. Large language models hallucinate, miss edge cases, and reflect biases in training data. We communicate this clearly in the platform UI and in this document. We do not present AI outputs as authoritative — we present them as starting points for learning and critical evaluation.

3. No Weaponized Engagement

Streak counters and XP systems are designed to reward genuine learning milestones, not to trigger compulsive usage. We do not send late-night notifications, deploy loss-aversion messaging, or use dark patterns to inflate engagement metrics. Engagement that doesn't serve the learner is not a metric we optimize for.

4. Responsible AI Provider Selection

We use Anthropic's Claude API because Anthropic's Constitutional AI approach and published safety research align with our governance values. We evaluate our AI provider relationships on an ongoing basis against our ethical standards. We do not use AI providers that cannot demonstrate responsible development practices.

5. Bias Identification & Remediation

We acknowledge that AI models reflect biases present in their training data. We commit to: (a) reviewing AI outputs quarterly for demographic, cultural, or professional bias; (b) updating challenge scenarios and prompts when bias is identified; (c) publishing a summary of identified and remediated biases in our annual ethics review.

6. Our Ongoing Commitment

This manifesto is a living document. As AI capabilities evolve and new ethical challenges emerge, we will update our practices and commitments. We welcome feedback from our community. If you believe we are falling short of these commitments, contact us at ethics@data-techcon.com.