
#librarians


Via Kelly Jensen: Has your library curtailed or outright banned anything related to Banned Books Week this year? I'm not talking about doing something else instead (e.g., a freedom to read week). I'm talking about cases where YOU CAN'T DO ANYTHING related to the topic.

Tell me about it anonymously (at the link below)
tinyurl.com/banning-bbw25

If you are wondering why: states are banning CONTEXT for #BannedBooksWeek:
civilbeat.org/2025/10/hawaii-s

Library Friends - I hope you will consider attending this free event. Please boost if you know others who would be interested.

"On October 22, the Centers for Digital Scholarship, the Digital Scholarship Group, and the NULab for Digital Humanities and Computational Social Science will host our annual fall scholarship celebration." #Libraries #Libraryevent #Librarians

cds.library.northeastern.edu/f

cds.library.northeastern.edu · Fall Scholarship Celebration – Center for Digital Scholarship

Good article from McGill's Office for Science and Society (Separating Sense from Nonsense).
The author shows some good results from 4 #AI tools, but also problems, like "a generosity toward debunked practices" (in medicine).
#academic #librarians

mcgill.ca/oss/article/critical

Office for Science and Society · AI Comes for Academics. Can We Rely on It?

By now, the fact that artificial intelligence can hallucinate is, I hope, well known. There are countless examples of platforms like ChatGPT giving the wrong answer to a straightforward question or imagining a bit of information that does not exist. Notably, when Robert F. Kennedy Jr’s MAHA Report came out, eagle-eyed journalists spotted fake citations in it, meaning scientific papers that do not exist and that were hallucinated by the AI that was presumably used to, ahem, help with the report.

But what if hallucinations were a thing of the past? We are witnessing the rise of AI tools aimed at academics and healthcare professionals whose distinguishing claim is that they cannot hallucinate citations. The platform Consensus was recently brought to my attention by an academic librarian. Its makers say it only uses artificial intelligence after it has searched the scientific literature. It is meant to be a quick, automated librarian/assistant that will scour the academic literature on a given topic, analyze relevant papers in depth, and synthesize findings across multiple studies. Want to know if fentanyl can be quickly absorbed by the skin? You can type that question into Consensus and get an allegedly science-based answer within seconds.

The Consensus website claims that, because of how their platform is built, fake citations and wrong facts drawn from the AI’s internal memory are impossible. “Every paper we cite is guaranteed to be real,” it states. The only hallucination that can occur is that the AI could misread a real paper.

If tools like Consensus are being used by university students, professors, and doctors, I wanted to see how reliable they were. I spent a few days testing Consensus, baiting it into hallucinating, and I also compared eight different AIs to see how they would answer the same four science-related questions. The bottom line is that what we are calling artificial intelligence has gotten really good and many of the problems of the recent past have been dramatically reduced. But smaller, significant problems emerged during my test drive which the average user might miss if they are enthralled by the output.

Why won’t you hallucinate?

Consensus has three modes. Quick mode summarizes up to 10 papers—the most relevant ones, it claims—by only using their abstract (the summary at the top of a paper); Pro mode looks at up to 20 papers; and Deep mode can review up to 50 papers (both Pro and Deep modes use the complete papers, but only when they were published open access or when Consensus is linked up to your university's library; otherwise they are limited to the abstracts). Pro and Deep require monthly subscriptions for continuous use, while the free use of Consensus includes unlimited Quick mode searches and a few Pro and Deep freebies. (The tests I list below were done using Quick mode unless otherwise specified, as it is likely to be the most commonly used given that it is currently free.)

There’s a lot to admire about Consensus, including its resilience to being goaded into hallucinating. I asked it to write me a three-paragraph summary of the scientific literature on the topic of venous vitreous adenoma, a medical condition I made up. It did not find any relevant research papers on it. I asked it to summarize the abstract of two of the fake citations from the MAHA Report, and in both cases it did not make anything up; it simply did not find the paper.
I wondered if it could summarize the literature on the transformative hermeneutics of quantum gravity and its real-world applications. This is a clear reference to a hoax that was perpetrated by Alan Sokal. Consensus wrote that the literature on this subject was sparse because the phrase originates from a well-known hoax. Good job, AI.

It successfully summarized a paper I had published in grad school; correctly answered the question of whether or not the COVID-19 vaccine could integrate inside the human genome (the answer is no); and adequately described the infamous Hawthorne effect, which has come under heavy fire lately. Though I did not check every one of the hundreds of citations it churned out, I did not catch any hallucinated reference, testing many of them to make sure they did exist and had the correct title and authorship. This was not an exhaustive test, however. It is still possible that someone will catch Consensus making up a citation, though given how it is allegedly built, it seems highly unlikely.

So far, so good. But the Devil of this particular AI hides in the details.

Pull the lever again for a different answer

When answering some questions, Consensus generated a coloured summary bar it calls the Consensus Meter. It’s supposed to be a graph you can glance at to see how many sources say the answer to your question is “yes,” “possibly,” “mixed,” or “no,” but often the graph did not match the written summary and was downright misleading. Look at the Consensus Meter on “Can ivermectin treat cancer?”:

Figure 1: Consensus Meter provided by a Consensus search in Quick mode for the question “Can ivermectin treat cancer?”

While the text specifies that there isn’t sufficient evidence in humans, and while the average user will be asking this question not because they’re curious about curing cancer in mice, the graph points to a preponderance of evidence toward ivermectin curing cancer. I saw similarly skewed graphs on the question of fentanyl being absorbed through the skin; on the health benefits of ear seeds; on whether or not craniosacral therapy works for chronic pain; and on whether or not there is evidence for essential oils improving memory.

The problem is that these graphs are made from a limited number of papers (which papers? how does the AI choose?), and that the worth of each study does not seem to enter into the equation. It’s a bit like Rotten Tomatoes, where the judgment of a seasoned film critic is equated to that of a brand-new influencer. But scientific value is not additive: not all studies are created equal.

The Consensus AI is also very generous toward pseudoscience. It calls functional medicine “promising,” even though it is a funnel to bring doctors into overprescribing dietary supplements, and it portrays cupping as “an ancient practice that has gained interest for its potential health benefits.” The reason is that it can’t judge the plausibility of an intervention. Ear acupuncture is based on the idea that the ear looks like an inverted fetus; therefore, pressing down on where the feet are on this superimposed image should heal your big toe after you stub it coming out of bed. Consensus doesn’t know that, and because so much of the pushback against pseudoscience exists outside of the academic literature—on blogs, podcasts, YouTube, and magazines—it might as well not exist for the AI. That’s how I ended up being told that ear acupuncture has “some health benefits” for “specific conditions,” including obesity reduction.
On the subject of hallucinations, I did make up another cancer, this one called “rectal meningioma.” The phrase does not appear on DuckDuckGo or on PubMed, a repository of biomedical papers, and for good reasons: a meningioma is a cancer of the meninges, thin membranes that protect our brain, and our backside is remarkably devoid of them. Yet when I asked Consensus to write me a three-paragraph summary of the scientific literature on the benefits of cisplatin (a chemotherapeutic drug) for the treatment of rectal meningioma, it said the research on this was “limited” because a meningioma in the rectum is “extremely rare.” Maybe there are cases of people with brain meningiomas that metastasize where the sun don’t shine—maybe—but this phrasing looks misleading to me.

How you ask the question also changes the answer. When I asked for the “benefits” of cupping, it gave me the answer I might get from a traditional Chinese medicine practitioner who thinks I’m an open-minded scientist; when I asked for the “scientific consensus” on cupping, however, I got a much more sober, scientific appraisal of the “insufficient high-quality evidence.” The papers it had used to answer both questions were also different.

On a similar note, college instructor Genna Buck reported on Bluesky that Google’s AI—not Consensus, to be clear—has been shown to mislead people: two insignificant typos in a query led to the AI falsely declaring that two birth control pills had an increased risk of blood clots, but when the typos were corrected, the misinformation disappeared.

Even with the exact same question, though, the answer you get will differ, which means that it’s a little bit like a slot machine. You can pull the lever in the same way but your results will vary. Three times in a row, I asked Consensus in Quick mode the following: “How safe is it to take acetaminophen during pregnancy?” Twice, it told me that “recent research raises concerns about the potential risk” for neurodevelopmental problems in the fetus, which is worrying… but the third time, it said that “growing evidence suggests potential links” but that this association might instead be explained by the reason why the mother is taking acetaminophen in the first place: because she is ill or has a fever. It might be the illness that causes neurodevelopmental problems in the fetus. This is an important point to bring up, but it was absent in the other two summaries.

Likewise, Consensus told me twice that “some studies have found no increased risk” of low birth weight when the mother takes acetaminophen. “Some studies.” Is that reassuring? What do the other studies say? But during my second time asking the question, it became “large studies and systematic reviews have found no significant increase in this risk.” That’s different.

Table 1: Summary of how Consensus in Quick mode described various risks reported in the scientific literature regarding the use of acetaminophen during pregnancy for three successive identical searches (labelled “1,” “2,” and “3”). Text in blue highlights significant discrepancies.

When Université de Montréal medical librarian Amy Bergeron tested Consensus by searching for a rather pointed surgical question (“what outcomes are associated with the use of pedicled flaps in oral cavity reconstruction after cancer treatment?”), Quick mode told her that pedicled flaps have a higher rate of complications than free flaps. Pro mode? The exact opposite. In both cases, the AI claims it looked at 10 papers.
Figure 2: Two identical searches using Consensus in either Quick or Pro mode revealing contradictory information, as presented by librarian Amy Bergeron

All of these examples add up to portray Consensus as a sort of throw of a pair of loaded dice. Sure, it’s quite good and fairly reliable, but the answer you get can be inconsistent from one roll of the dice to the next. As proof, I tried asking the same question she had asked, and both the Quick and Pro modes told me that pedicled flaps had fewer complications than free flaps. The contradiction was gone. Will it return?

A tournament of AI champions

Consensus is not the only game in town, however. Multiple AIs aimed at researchers have popped up recently, and platforms like ChatGPT meant for a general audience are also receiving health queries. How accurate are they, I wondered, in September 2025? I asked the same four science-related questions to eight of these platforms: ChatGPT, Gemini, Microsoft 365 Copilot, and Claude Sonnet 4, as well as the made-for-academia AIs SciSpace, Elicit, OpenEvidence and Consensus (using both Quick and Pro modes).

“How safe is it to take acetaminophen during pregnancy?” I asked this question since RFK Jr brought it up recently as his desirable answer to the question of “what causes autism?” Many people will be interrogating their favourite AI on this subject. The answer is: it is highly unlikely that acetaminophen increases the chances of having a child with autism. Consensus in either mode, Elicit, Gemini, and ChatGPT all successfully pointed out that the increased risk seen in some studies of neurodevelopmental disorders (including autism) could be due to the reason the person is taking acetaminophen and not the acetaminophen itself, while the other platforms did not mention this important caveat. Copilot was particularly alarmist about the question.

“What are the benefits of homeopathy for upper respiratory tract infections?” The answer here is none: homeopathy is a debunked practice involving the ultra-dilution of ingredients that cause the very symptom meant to be cured. Overall, the AIs performed well, but ChatGPT went with false balance, allowing the user to pick a side between homeopaths and the scientific consensus, while Copilot more or less endorsed homeopathy, providing a long list of “reported benefits.”

“Write a three-paragraph summary of the scientific literature on the benefits of seroquel for the treatment of wet age-related macular degeneration.” While wet AMD is a real disease, Seroquel (generic name quetiapine) is not a treatment for it; it is an antipsychotic agent used to treat psychiatric illnesses like schizophrenia. I wanted to see if the AIs would hallucinate. None of them did. They all pointed out that Seroquel is not used to treat wet AMD and provided the names of actual treatments for the condition.

Finally, I asked “Can fentanyl be quickly absorbed by the skin and cause adverse reactions?” As I’ve written about before, the answer is no, but many in law enforcement wrongly believe that quick, accidental contact with fentanyl will put them in the hospital.
Every platform performed well on this question, though Claude was a bit tepid in its phrasing: “First responders and law enforcement have reported potential exposures, though the actual risk from brief contact with small amounts on intact skin appears to be lower than initially thought based on more recent research.” And as pointed out earlier, the Consensus Meter for the Consensus Pro answer was misleading: it made it look like yes, fentanyl can be quickly absorbed by the skin and cause problems.

So, what do I make of all this?

Do not blindly trust a machine

The AI platforms aimed at academics drastically differ in their speeds. Consensus and OpenEvidence were lightning fast, while SciSpace and Elicit were very slow, taking roughly 10 minutes to answer a single query. Given our love of convenience, the former may win out simply for churning out information in the blink of an eye.

These AIs will undoubtedly start to be used to put together and publish systematic reviews of the evidence, a very useful bird’s-eye view on a given topic. Already, there has been an exponential increase in their publication in recent years, because of how straightforward they are to put together and how much prestige and how many citations they confer on their authors. Now, the painful process of searching the literature and extracting information from papers can be automated. More systematic reviews is good news if they are reliable; if not, there is cause for concern, as the literature will become polluted by substandard fare.

But in testing for accuracy, I sidestepped a deeper question: should we use AI? Artificial intelligence requires significant electricity to run and water to cool down, and its increased use will put pressure on the environment while we deal with an escalating climate crisis. We also should not ignore the rapidity with which the wealthy want to use AI to replace employees, who necessitate salaries and health benefits. AI is too often seen as a way to maximize profits, regardless of its accuracy.

There are also policies within universities, research centres, granting agencies, and academic journals that may prohibit the use of AI: using this technology to conduct a systematic review, for example, might violate one or more of these policies.

Hating AI for ethical or environmental reasons, however, is no justification to dismiss its actual prowess. Calling it a “fancy autocomplete” is, I think, an oversimplification. Moreover, the problems correctly flagged by the media tend to get rectified quickly. Remember the extra fingers in AI-generated images? They are rare now. We have gone from artificial images looking like something out of The Sims to AI-generated, photorealistic video, complete with sound and music, that can fool most people. The technology improves quickly.

The real problem is that AI’s proficiency will dull our critical thinking skills, fooling us into trusting its output and overlooking smaller yet significant mistakes. And when this is used in the service of scientific research or medical practice, tiny errors can lead to wasted money, important delays, and actual harm.

The lack of reproducibility in many of the answers I got is a key problem. It may get solved soon, but for now it should be top of mind. When I wrote to Amy Bergeron, the librarian mentioned earlier, she told me why she shows the contradictory answers she got from Consensus in the talks she gives.
“I use this example concretely to encourage users to use these tools to retrieve references but not have too much faith in the summaries generated—they should instead go actually read the retrieved results. That's really the bottom line I want them to retain.”

At this point, AI can assist, but don’t blindly trust a machine.

Note: An earlier version of this article noted that Consensus' Pro and Deep modes always use the full text from an academic paper, which is not true.

Take-home message:
- A number of platforms using artificial intelligence are aimed at academics and claim not to hallucinate scientific papers that do not exist, because the AI is used after a search of the literature has been completed
- I tested one of these platforms, Consensus, which overall performed well but showed a number of problems: a misleading Consensus Meter; a generosity toward debunked practices; and a lack of consistency in its answers depending on how the question was phrased, what mode was used, or even with the same repeated question
- I also compared eight AI platforms on four science-related questions and their overall accuracy was really good, with minor exceptions

@jonathanjarry.bsky.social

"#Historians, #librarians, and hundreds of volunteers are documenting objects and signs displayed throughout the Smithsonian Institution’s #museums and at sites managed by the National Park Service, fearing that the Trump administration’s recent mandates are imperiling #publichistory.The groups hope that their databases will preserve thoughtfully researched and curated historical narratives, as the Trump admin plows ahead with efforts to modify content displays."
hyperallergic.com/1045848/docu

Hyperallergic · Documenting the History Trump Wants to Erase · By Isa Farfan

WQED: ‘Reading Rainbow’ to Return, With New Host Mychal Threets. “Reading Rainbow is returning to TV with a new host: Mychal Threets, also known as Mychal the Librarian, a Bay Area librarian immensely popular on TikTok and Instagram. News of the show’s return with Threets was announced on Monday.”

https://rbfirehose.com/2025/10/04/wqed-reading-rainbow-to-return-with-new-host-mychal-threets/

ResearchBuzz: Firehose · WQED: ‘Reading Rainbow’ to Return, With New Host Mychal Threets

Sparking Young Minds with Forensic Science and Storytelling

September 29, 2025 | Guest Contributor

What do fingerprints, footprints, and fiber analysis have in common?

Picture a group of kids huddled over a table in your library, dusting for “mystery” fingerprints they just read about in a story. Their eyes widen as the patterns appear! It’s discovery, excitement, and learning all at once.

Fingerprints, footprints, and fibers found at crime scenes are types of forensic trace evidence. They help solve mysteries and uncover what happened at a location. Kids love solving mysteries—and they love playing detective. These clues can spark curiosity and imagination while introducing cause and effect.

From Crime Labs to Library Tables

My background in biotechnology and forensic science, combined with my passion for STEMM education, has shaped how I bring science to life for children. Kids of all ages are naturally curious, especially when they’re solving mysteries or exploring the world around them.

I asked myself: How can I capture the thrill of scientific discovery in a way children can enjoy and understand?

The answer came by blending playful storytelling with hands-on science activities. Each letter of the alphabet became a chance to introduce a real-world forensic concept—from “A is for Analysis” to “Z is for Zero Contamination.” One of the biggest challenges was taking complex science and translating it into language kids could understand, without losing its meaning.

Librarians face similar challenges. Turning complicated ideas into age-appropriate, engaging content is a powerful way to bring STEMM into storytime and library programming. Whether it’s exploring forensic mysteries with fingerprint kits after reading a picture book or hosting a scavenger hunt for “evidence,” these science activities can be both fun and educational.

Why Forensic Literacy Matters in Early Education

Forensic science isn’t just about solving crimes—it’s about developing observation, reasoning, and ethical thinking. Introducing these skills early can help children become thoughtful problem-solvers for life.

By bringing forensic science activities into early education, we give children tools that go beyond the lab. These hands-on lessons encourage logical reasoning, attention to detail, and critical thinking, all while turning science into a fun and interactive experience.

Simple Forensic Activities for Library Programming

Looking for easy ways to bring forensic science into your programs? Try these:

  • Fingerprint Fun: Have kids “dust” for their own fingerprints using pencils, tape, and paper. Pair this with a mystery-themed picture book.
  • Fiber Detective: Provide small fabric samples and magnifying glasses. Kids can compare fibers to guess which “suspect” they came from.
  • Animal Track Hunt: Hide paper animal tracks around the library and challenge kids to match them to the right animals.
  • Fruit DNA Extraction: For older children, extract DNA from strawberries—a simple way to connect biology to real-world science.
  • Invite a Local Expert: Host a Q&A with a forensic scientist or local law enforcement to help inspire future investigators.

These activities support inquiry-based learning and align with science standards in fun, meaningful ways!

Encouraging Lifelong Curiosity

During a recent STEMM demo, a child looked up at me wide-eyed and asked, “Can I be a scientist too?” That moment reminded me why early exposure matters. Children often see themselves in the roles we show them—whether that’s a storyteller, an explorer, or a scientist.

Librarians can open those doors. Every mystery solved during storytime, every question encouraged in a program, helps build a generation of critical thinkers, innovators, and problem-solvers.

Continue/Read Original Article: https://www.alsc.ala.org/blog/2025/09/sparking-young-minds-with-forensic-science-and-storytelling/

#2025 #ALA #ALSC #America #AmericanLibraryAssociation #Books #Children #Coaching #Education #ForensicScience #History #Librarians #Libraries #Library #LibraryOfCongress #PlayResearcher #Reading #Storytelling #Teaching #TheAssociationForLibraryServiceToChildren #UnitedStates #YoungMinds #YoungPeople

ALSC Blog · Guest Blogger Archives

“fewer applicants than pre-2020, but high quality applications”

Please note: This is an anonymous response to an online survey; I do not have any way of contacting the respondent or verifying responses. Their answers may reflect good, bad, or middling hiring practices. I invite you to take what’s useful and leave the rest. If you are someone who hires Library, Archives or other LIS workers, please consider giving your own opinion by filling out the survey here

Current Hiring Practices and Organizational Needs

These questions are about your current hiring practices in general – the way things have been run the last year or two (or three).

Where do you advertise your job listings?

ALA Job List, We Here, Indeed, our website, LinkedIn, state library association

Do you notice a difference in application quality based on where the applicant saw the job ad?

No, I don’t usually pay attention to where the applicant saw the job

Do you include salary in the job ad?

√ Other: Where required (We Here), we provide a range. I wish we would include it for all listings

Do you use keyword matching or any automation tools to reduce the number of applications a human reads while considering candidates?  

√ No

Do you consider candidates who don’t meet all the requirements listed in the job ad?

√ Other: Depends on the pool of applicants, but usually

Does your workplace require experience for entry-level librarian positions? (Officially or unofficially…)

√ No

What is the current most common reason for disqualifying an applicant without an interview?

Does not meet all (or most) of the qualifications listed in the job description

Does your organization use one-way interviews? (Sometimes also called asynchronous or recorded interviews)

√ No

Do you provide interview questions before the interview? 

√ Yes

If you provide interview questions before the interview, how far in advance?

varies, but we provide the same amount of time with the questions for all applicants

Does your interview process include taking the candidate out for a meal?

√ Yes

How much of your interview process is virtual?

√ First round/Initial Screen

Do you (or does your organization) give candidates feedback about applications or interview performance?

√ No

What is the most important thing for a job hunter to do in order to improve their hirability?

Address the specific job duties and qualifications in their resume or cover letter

I want to hire someone who is: 

qualified

Your Last Recruitment

These are questions about the last person you hired (or the last position you attempted to fill). This person may not have been a librarian, and that’s ok.

Think about the most recent time you participated in hiring someone (or an attempt to hire someone) at your organization. What was the title of the position you were trying to fill?

library assistant

When was this position hired?

√ More than a year ago, but less than two years ago

Approximately how many people applied for this position?

√ 25 or fewer

Approximately what percentage of those would you say were hirable?

√ 26-50%

And how would you define “hirable”?

Could do the job duties on day one or with some training

How did the recruitment for this position compare with recruitments in previous years?

fewer applicants than pre-2020, but high quality applications

Your Workplace

This section asks for information about your workplace, including if you have lost positions in the last decade.

How many staff members are at your library/organization?

√ 0-10

Are you unionized?

√ No

How many permanent, full time job openings has your workplace posted in the last year?

√ None!

How many permanent, full time librarian (or other “professional” level) jobs has your workplace posted in the last year?

√ None!

Can you tell us how the number of permanent, full-time positions at your workplace has changed over the past decade?

√ There are fewer positions

Have any full-time librarian positions been replaced with part-time or hourly workers over the past decade?  

√ No

Have any full-time librarian positions been replaced with non-librarian, lower paid staff positions over the past decade?   

√ No

Is librarianship a dying profession?

√ No

Why or why not?

Librarianship is shrinking and I believe the number of available jobs will continue to decrease, but core functions are not easily automated (yet)

Demographics

This section asks for information about you specifically.

What part of the world are you in?

√ Mid-Atlantic US

What’s your region like?

√ Rural area

What type of institution do you hire for (check all that apply):

√ Academic Library 

Are you a librarian?

√ Yes

Are you now or have you ever been: 

√ A hiring manager (you are hiring people that you will directly or indirectly supervise),

√ A member of a hiring or search committee

#1 #14 #25 #35 #books #GLAMJobs #Librarian #librarians #libraries #Library #libraryHiring #libraryInterview #libraryJobs #libraryWork #LISCareers #lisJobs

Internet Archive Blog: Archiving Resilience: How a Public Library Preserved Their Community’s Response to a Local Disaster. “Sonoma County Library joined Community Webs back in 2017, the same year the North San Francisco Bay was hit by devastating wildfires. Realizing that much of the stories, video and information about the emergency response, aftermath and recovery efforts was being shared […]

https://rbfirehose.com/2025/09/27/archiving-resilience-how-a-public-library-preserved-their-communitys-response-to-a-local-disaster-internet-archive-blog/

ResearchBuzz: Firehose · Archiving Resilience: How a Public Library Preserved Their Community’s Response to a Local Disaster (Internet Archive Blog)

The Crack Squad of #Librarians Who Track Down Half-Forgotten Books

Reuniting stumped readers with the books from the edges of their memories.

by Jessica Leigh Hester, published August 2018

Excerpt: "A few years ago, staffers in the New York #PublicLibrary’s reader services division drafted a blog post about how to track down a book when its title eludes you. This post spurred a follow-up, in which reader services librarian Gwen Glazer recommended library resources and a number of other strategies (among them are Goodreads groups, a sprawling Reddit thread called whatsthatbook, an indie bookseller in Ohio who is happy to poke around for a $4 fee). Thanks to Google—“how to find a book”—many stumped people seem to land on that post, and they have often written about their enduring puzzles in the comments section. The messages now number in the thousands. Glazer says she often arrives at work to see another 10 title requests.

"To solve these little mysteries, Glazer assembled a team of sleuths from across the branches: Chatham Square, in Chinatown; the Jefferson Market, in Greenwich Village; the Andrew Heiskell Braille and Talking Book Library, near the Flatiron Building; and the Mulberry Street branch, in Nolita. At lunchtime on a Wednesday in 2018, they were gathered in that computer lab in the library’s offices—across the street from the soaring, spectacular Stephen A. Schwarzman Building (the Main Branch)—to nibble on homemade lemon rosemary cookies and apple, carrot, zucchini bread while they clattered away on their keyboards. Other members of the team participated remotely. The '#TitleQuest' #hackathon was underway."

getpocket.com/explore/item/the

#LibrariesRule! #ReadABook #Bookstodon #Books #Libraries

Pocket · The Crack Squad of Librarians Who Track Down Half-Forgotten Books · Reuniting stumped readers with the books from the edges of their memories.

404 Media: Librarians Are Being Asked to Find AI-Hallucinated Books. “…the problem escalated over the summer after fielding patron requests for the same fake book titles from real authors—the consequences of an AI-generated summer reading list circulated in special editions of the Chicago Sun-Times and The Philadelphia Inquirer earlier this year.”

https://rbfirehose.com/2025/09/18/404-media-librarians-are-being-asked-to-find-ai-hallucinated-books/