ACS (American Chemical Society) has integrated Writefull’s Manuscript Categorization service into its publishing workflow, using it to automatically classify accepted manuscripts by language quality. Writefull’s Manuscript Categorization API generates data that publishers can use to increase workflow efficiency and reduce post-acceptance production costs.

Automated manuscript classification

The vast majority of accepted manuscripts need copyediting, but the level of editing they need differs. Knowing how much language editing a manuscript requires gives publishers the data they need to build more efficient workflows. For example, publishers may use this data to assign manuscripts to designated resources or workflow paths that differ for manuscripts that need little editing versus those that need more work.

Most publishers either do not evaluate the language quality of individual manuscripts or evaluate it manually in a manner that is time-intensive and hard to scale, especially considering the growing number of manuscripts being published. Writefull’s API significantly speeds up this process by automatically categorizing batches of manuscripts. Categories are assigned using Writefull’s proprietary language models.
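As an illustration only, the categorize-and-route step described above could look something like the sketch below. Everything here is a hypothetical assumption: the category labels, the routing rules, and the `classify` stub (which stands in for Writefull’s proprietary language models and is replaced by a crude, invented heuristic so the example runs). This is not Writefull’s actual API or any publisher’s production logic.

```python
# Hypothetical sketch: routing a batch of accepted manuscripts to
# copyediting workflow paths based on a language-quality category.
# The classifier below is a stand-in for Writefull's proprietary
# models; labels and routing rules are invented for illustration.

def classify(manuscript_text: str) -> str:
    """Placeholder for a real language-quality classifier."""
    # Invented heuristic so the sketch is runnable; a real classifier
    # would evaluate the language quality of the full document.
    return "minimal" if len(manuscript_text) > 100 else "heavy"

def route_batch(manuscripts: dict[str, str]) -> dict[str, str]:
    """Map each manuscript ID to a (hypothetical) workflow path."""
    paths = {"minimal": "light-edit", "heavy": "full-edit"}
    return {ms_id: paths[classify(text)] for ms_id, text in manuscripts.items()}

batch = {"ms-001": "x" * 150, "ms-002": "short draft"}
print(route_batch(batch))
```

The point of the sketch is the shape of the workflow, not the classifier itself: once each manuscript carries a category, assignment to editing resources becomes a simple lookup rather than a manual review.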

ACS’s use case

Hoping to automate the task of language evaluation, ACS started working with Writefull two years ago. Together, ACS and Writefull thoroughly vetted and shaped the Manuscript Categorization service as it exists today. ACS has now fully integrated Writefull’s API into its pipeline, using it to evaluate the language quality of all accepted manuscripts.

The API assigns classifications to manuscripts at scale without editors having to open documents and scan the text. This means that the integration significantly reduces the time ACS spends on manuscript evaluation.

The reliability of the API is key, as accurate categorization ensures that each manuscript gets the editing it requires. ACS and Writefull have therefore thoroughly tested the API’s outputs and found over 95% agreement between Writefull’s classifications and ACS’s human-assigned labels. Mismatches were mostly due to variability in the human classifications. While human evaluation is subjective and often based only on the first part of a manuscript, Writefull applies a fixed set of criteria to the entire document, making the automated method more consistent and reflective of the manuscript as a whole.

More information

Would you like to learn more, or discuss how Writefull may fit your publishing workflow? Please contact us at