mindfield.ai

AI Content Policy

Last updated: April 22, 2026

mindfield.ai uses large language models (LLMs) to generate answers to your questions. This page explains how that works, what the limitations are, and how you should interpret and use the answers we provide.

How We Use AI

When you submit a query, we send it to a third-party LLM inference API (from providers such as OpenAI or Anthropic). The model synthesizes a response based on patterns learned during its training. We may augment the model's response with retrieved web content or other sources to improve freshness and accuracy. The final answer is then presented to you, sometimes with source citations.
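As a rough illustration only, the flow above (query, retrieval, LLM synthesis, cited answer) can be sketched in Python. Every function name here is a hypothetical placeholder; this is not our actual implementation or any provider's real API.

```python
# Illustrative sketch of the answer pipeline described above.
# retrieve_sources() and call_llm() are stand-ins, not real APIs.

def retrieve_sources(query: str) -> list[dict]:
    """Stand-in for web retrieval used to improve freshness."""
    return [{"url": "https://example.com/doc", "snippet": "example snippet"}]

def call_llm(prompt: str) -> str:
    """Stand-in for a third-party LLM inference API call."""
    return f"Synthesized answer for: {prompt[-40:]}"

def answer(query: str) -> dict:
    sources = retrieve_sources(query)  # augment with retrieved content
    context = "\n".join(s["snippet"] for s in sources)
    text = call_llm(f"{context}\n\nQuestion: {query}")
    # Present the answer together with source citations
    return {"answer": text, "citations": [s["url"] for s in sources]}
```

Note that the citations in this sketch come from the retrieval step, not the model itself; as discussed under Sources & Citations below, a model left to generate its own citations may fabricate them.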

Accuracy & Factuality

AI-generated answers are not always correct.

LLMs are powerful tools, but they have well-documented limitations with respect to factual accuracy:

  • Hallucinations — the model may generate plausible-sounding but entirely fabricated information, including fake citations, statistics, quotes, or events.
  • Knowledge cutoffs — training data has a cutoff date. Answers about recent events, new research, or rapidly changing topics may be outdated or missing entirely.
  • Confidence calibration — models can present incorrect information with the same fluency and apparent confidence as correct information. Authoritative tone is not a reliable indicator of accuracy.
  • Nuance and context — complex, contested, or highly contextual questions (law, medicine, finance, relationships) may be oversimplified or fail to capture important caveats.
  • Mathematical and logical errors — LLMs can make arithmetic mistakes, flawed logical inferences, or systematic errors in structured reasoning tasks.

Not a Substitute for Professional Advice

Answers on mindfield.ai are for general informational purposes only. They are not a substitute for:

  • Medical advice — consult a licensed healthcare professional for any health or medical concern.
  • Legal advice — consult a licensed attorney before making any legal decisions.
  • Financial advice — consult a qualified financial advisor before making investment or financial decisions.
  • Safety-critical decisions — do not rely on AI-generated content in situations where errors could cause harm (e.g., emergency procedures, engineering or structural assessments).

Always verify important information with authoritative primary sources before acting on it.

Sources & Citations

Where the Service surfaces source links alongside an answer, those links are provided to help you verify and explore the topic further. However:

  • A cited source does not guarantee that the answer accurately reflects that source's content.
  • In some cases the model may generate a citation that does not exist or does not support the claim made.
  • Source content may have changed since it was indexed.

We encourage you to click through and read sources directly.

Bias & Perspective

LLMs are trained on large corpora of human-generated text, which reflects the biases, viewpoints, and perspectives present in that data. Answers may inadvertently reflect cultural, political, or demographic biases. On topics where reasonable people disagree, the model may present one view more prominently than others. We are actively working to reduce bias, but it cannot be entirely eliminated with current technology.

Content Moderation

We apply content safety measures at both the query and output level to prevent the generation of:

  • Content that facilitates illegal activity or self-harm (e.g., instructions for creating weapons or committing crimes).
  • Hate speech, harassment, or content targeting individuals or groups.
  • Sexually explicit or graphic violent content.
  • Disinformation designed to deceive.

Despite these filters, no system is perfect. If you encounter harmful or inappropriate output, please report it to safety@mindfield.ai.

Your Queries & AI Training

We may share anonymized or aggregated query data with AI model providers for the purposes of model improvement, safety research, and abuse prevention, subject to our agreements with those providers. We do not share personally identifiable query data for model training without your explicit consent. See our Privacy Policy for more detail.

Feedback & Corrections

We want to improve the quality and accuracy of answers over time. If you receive an answer that is factually incorrect, harmful, or otherwise problematic, please let us know at feedback@mindfield.ai. Your reports help us improve content quality for everyone.

Changes to This Policy

AI technology and our use of it evolve quickly. We may update this policy to reflect new capabilities, risks, or practices. The "Last updated" date at the top of this page indicates when changes were last made.

Contact

Questions about our use of AI? Contact support@mindfield.ai. For safety or content concerns, email safety@mindfield.ai.