Within scholarly publishing, there are many long-standing uses of AI, albeit pre-generative. The emergence of powerful generative AI (ChatGPT, etc.) over the past year has been a paradigm-shifting event, bringing with it a multitude of new opportunities but also significant challenges and the need for policies to help navigate this fast-evolving space. Potential threats include plagiarism, papermills, and fake data; our defenses continue to evolve and will include detection software and oversight of our content.

OUP is actively working to protect against AI bots crawling for content: a script blocks known bots from crawling and ingesting content from public and open webpages, and express permission is required before any LLM can ingest OUP-published content. We lead an industry-wide initiative, the STM Integrity Hub, which will allow us to aggressively target papermills, and we are piloting a number of AI-based tools, such as pre-submission technical checks and alt-text generation.

To date, OUP has also implemented a set of recommended best practices: authors should declare any use of "Natural Language Processing tools" in the preparation of their manuscripts, and reviewers should not run manuscripts under peer review through any AI tools, to ensure the integrity of the research we publish in our journals. As AI evolves, OUP will continue its work of protecting intellectual property, ensuring research integrity, and introducing AI-based innovations to improve publishing processes and experiences for our authors.
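As a rough illustration of the bot-blocking approach described above (OUP's actual script is not public, so this is a minimal sketch under assumed details), a server-side check can refuse requests whose User-Agent matches a known AI crawler. The bot tokens below are publicly documented crawler names; the function itself is hypothetical:

```python
# Illustrative sketch only: OUP's actual blocking script is not described here.
# These are publicly documented AI-crawler User-Agent tokens.
KNOWN_AI_BOTS = {"GPTBot", "CCBot", "Google-Extended", "anthropic-ai"}

def is_blocked(user_agent: str, blocklist=KNOWN_AI_BOTS) -> bool:
    """Return True if the request's User-Agent matches a known AI crawler."""
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in blocklist)

# A request from a known crawler is refused before content is served;
# an ordinary browser request passes through.
print(is_blocked("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # True
print(is_blocked("Mozilla/5.0 (Windows NT 10.0)"))         # False
```

In practice such a check is paired with a `robots.txt` file disallowing the same user agents, since well-behaved crawlers honor those directives while the server-side check enforces the policy against those that do not.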