AI Policy
Bastas Publications DOO (Bastas) supports the use of emerging technologies, including Artificial Intelligence (AI), to improve the research and writing process. This policy aims to help authors, reviewers, and editors evaluate whether AI technologies are used ethically, transparently, and in accordance with scientific integrity.
Authorship
We do not accept papers generated by Artificial Intelligence (AI), including generative AI (GenAI), large language models (LLMs), or machine learning tools. GenAI and AI-assisted technologies cannot be listed as authors or co-authors, nor cited as such.
Restricted Use
The use of generative artificial intelligence (GenAI) and AI-assisted technologies (e.g., ChatGPT, Perplexity AI, Gemini, Claude, DeepSeek, Copilot) is not considered appropriate for the core tasks of content creation or research design. We strongly discourage using GenAI to produce substantive scientific content, including full paragraphs, literature reviews, data analysis, results interpretation, figures, or tables.
AI tools may produce hallucinated (non-existent or inaccurate) references. Authors must verify the authenticity of all references before submission and during peer review; the author remains solely responsible for the accuracy of sources.
Permitted Use
AI may be used to improve the readability, structure, clarity, or language of text created by the authors. The use of AI tools is permitted only for such purposes.
Transparency and Disclosure
Authors must clearly disclose any use of GenAI in:
- the cover letter, and
- a statement at the end of the manuscript,
specifying the tool used and the purpose of its use.
Statement to be included in the manuscript:
AI statement: During the preparation of this manuscript, the author(s) used [NAME OF TOOL] for [PURPOSE]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the published article.
AI statement: No generative AI or AI-based tools were used in preparing this manuscript.
For Editors and Reviewers
- Editors and reviewers must maintain confidentiality throughout the peer review process.
- GenAI must not be used to generate peer-review reports.
- Any use of AI for limited tasks (e.g., language checking) must be disclosed to the editorial office.
- Reviewers remain fully responsible for the accuracy, fairness, and integrity of their evaluations.
For more information on this issue, please consult the following articles:
IPA Work on Artificial Intelligence
The Role of Artificial Intelligence in Publishing
STM: Generative AI in Scholarly Communications
Academic publishers and AI do not need to be enemies
COPE Position Statement: Authorship and AI tools
Chatbots, Generative AI, and Scholarly Manuscripts