Results: The 100 systematic review articles contained 453 database searches. Only 22 (4.9%) database searches reported all six PRISMA-S items. Forty-seven (10.4%) database searches could be reproduced within 10% of the number of results from the original search; 6 searches differed by more than 1000% between the originally reported number of results and the reproduction. Only one systematic review article provided the necessary search details to be fully reproducible.
This blogpost looks at a Knowledge Specialist acting as a point of contact for three Research Units within the Trust. Any use or ideas for working with R&I, or for promoting LKS services to them?
Conclusion: The results of this study show heightened complexity in ChatGPT-generated SCI texts, surpassing optimal health communication readability. ChatGPT currently cannot substitute comprehensive medical consultations. Enhancing text quality could be attainable through dependence on credible sources, the establishment of a scientific board, and collaboration with expert teams. Addressing these concerns could improve text accessibility, empowering patients and facilitating informed decision-making in SCI.
We developed a highly accurate, simple, transportable, scalable method to identify publications in PubMed and Scopus authored by anesthesiology faculty. Manual checking and faculty feedback are required because not all names can be disambiguated, and some references are missed. This process can greatly reduce the burden of curating a list of faculty publications. The methodology applies to other academic departments that track faculty publications.
In November, we held our inaugural gathering, welcoming 20 colleagues from various NHS trusts. Included as a reminder / inspiration in case anyone from our team is going to this, or would consider going.
Inspired by BBC Radio 4’s Desert Island Discs, the Libraries at Lancashire Teaching Hospitals ran an initiative called Castaway Books. Their experience is worth a look - engagement started well but tailed off later. Keep this as evidence in case we want to trial something similar?
Evaluation using two widely accepted tools shows that most websites related to COVID-19 are reliable and useful for physicians, researchers and the public.
People had a longer attention span for video-based patient information than for text; they spent longer on it (so it was less efficient) but felt better informed afterwards. I know we don't do patient information at the moment, but I thought this was worth putting to one side (i.e. on the wiki) for future reference.
This study adds to our understanding of key topics in social science research on COVID-19. The automated literature analysis presented is particularly useful for librarians and information specialists keen to explore the role and contributions of social science topics in the context of pandemics. To read the full article, choose the OpenAthens “Institutional Login” option and search for “Midlands Partnership”.
Evidence surveillance was guided by practical considerations of efficiency and sustainability. A single PubMed search covering all guideline topics, limited to systematic reviews and randomised trials, is run monthly. The search retrieves about 400 records a month, of which around a sixth are triaged to the guideline panels for further consideration. Evaluations against Epistemonikos and the Cochrane Stroke Trials Register demonstrated the robustness of adopting this more restrictive approach. Collaborating with the guideline team in designing, implementing and evaluating the surveillance is essential for optimising the approach.
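For anyone curious what a single monthly search like this might look like in practice, here is a minimal sketch using NCBI's public E-utilities API. The topic terms, retmax value and one-month window are placeholder assumptions for illustration, not the guideline programme's actual search strategy.

```python
from urllib.parse import urlencode

# Base URL for NCBI E-utilities esearch (real, documented endpoint).
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_monthly_search(topic_terms, days=31):
    """Build an esearch URL combining topic terms with publication-type
    filters (systematic reviews and randomised trials) and a rolling
    one-month date window, mirroring the surveillance idea above."""
    topics = " OR ".join(f"({t})" for t in topic_terms)
    filters = ('("systematic review"[pt] OR '
               '"randomized controlled trial"[pt])')
    params = {
        "db": "pubmed",
        "term": f"({topics}) AND {filters}",
        "reldate": days,       # only records from the last `days` days
        "datetype": "edat",    # Entrez date (date added to PubMed)
        "retmax": 500,         # placeholder ceiling on records returned
        "retmode": "json",
    }
    return f"{EUTILS}?{urlencode(params)}"

# Hypothetical topic terms standing in for the guideline topics.
url = build_monthly_search(["stroke rehabilitation", "thrombolysis"])
```

Fetching the resulting URL each month and passing the retrieved records to a triage step would reproduce the shape of the workflow described, though the real strategy would need the validated topic terms.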
The findings can inform future research and practice on both individual and societal levels:
During times of uncertainty, mental health practitioners should actively educate their clients about the potential consequences of excessive health information-seeking. This can include behavioural interventions, such as controlled/limited exposure to news and social media at specific times of day and/or breaks from information overload.
The study also supports the need to promote social media literacy skills to help young people and adults critically evaluate the information they encounter and distinguish credible sources from misinformation.
Practitioners can also encourage individuals to nurture their social support networks, as well as their self-care routines. Positive interactions can limit and counterbalance the negative impact of excessive information-seeking.
'As KNOWvember comes into view once more, here at Pennine Care we have been reflecting on our activities for 2022.' Looks at a range of knowledge management activities, including offering alerts based on evidence searches, using the KM tool, and establishing Teams channels to facilitate communities of practice.
Josiah Richardson is a Senior Library Assistant at an NHS trust, whilst also studying for the CILIP Level 3 Library, Information and Archive Services Assistant NVQ. In this case study, Josiah discusses how AI has simplified and sped up reporting and increased his knowledge of Excel.
This document builds on previous NHS Digital guidance on digital inclusion for health and social care.
Use it to design and implement inclusive digital approaches and technologies, which are complementary to non-digital services and support.
This paper explores the potential of language models such as ChatGPT to transform library cataloging. Through experiments with ChatGPT, the author demonstrates its ability to generate accurate MARC records using RDA and other standards such as the Dublin Core Metadata Element Set. These results demonstrate the potential of ChatGPT as a tool for streamlining the record creation process and improving efficiency in library settings. The use of AI-generated records, however, also raises important questions related to intellectual property rights and bias. The paper reviews recent studies on AI in libraries and concludes that further research and development of this innovative technology is necessary to ensure its responsible implementation in the field of library cataloging.
This article provides a brief overview of the capabilities of ChatGPT for medical writing and its implications for academic integrity. It lists AI generative tools, common uses of these tools in medical writing, and AI-generated-text detection tools, and offers recommendations for policymakers, information professionals, and medical faculty on the constructive use of AI generative tools and related technology. It also highlights the role of health sciences librarians and educators in deterring students from submitting ChatGPT-generated text as their own academic work.