In January, I ran a reader’s survey, asking, among other things, which Mel’s Desk content is most valuable to you. As it happens, an unexpected opportunity to step into a supervisory role at my library seriously derailed my plans to use the survey responses to refresh and refocus Mel’s Desk this year.
However, one thread about “best practices” from the survey has been frequently in my thoughts, even if I’ve been unable to address it here yet. When I asked, “What other types of posts or topics would you like to read?” one thoughtful reader said, “[About the] research underlying practices,” and followed that up with, “1) Make purpose or expected result of practice crystal clear 2) use comparative analysis to show a particular practice is best possible for purpose. We need stronger comparative analysis to know what is best in our practice.”
I’m not sure exactly what context this reader had in mind in terms of practices–they may have been thinking of programming, customer service, readers’ advisory, any number of things. But of course when I read it I thought of storytime.
As a storytime trainer I’ve been wrestling with the idea of best practices for a long time. Way back in 2010, after a round of storytime observations, I asked, “What aspects of storytime fall under ‘personal style’ and what fall under ‘best practices’?” In other words, is there anything about a storytime performance that we can label objectively ideal?
This is exactly the kind of appealing idea that the survey reader raised–that we can test and measure different approaches to how we present our services, possibly including storytime.
I still go back and re-read the comments to that post as well as the ones to my follow-up post “What Not To Do,” because your thoughts were so valuable to me as I developed a set of storytime competencies to guide training and mentoring at my library.
As we worked out those storytime competencies, though, my boss and I realized that they depended completely on our vision of storytimes–not as a Platonic ideal, but for our particular library. Our goals for storytime for our community shaped what we wanted our competencies–our “best practices”–to be. I talked about this, how various goals for storytime affect the assessment of storytime, in a 2012 post, “Storytime Questions and Storytime Goals.”
Now here we are in 2014! I am still completely invested in the idea that storytime practices–for any storytime, at any library–can and should be regularly assessed for areas of improvement. I believe this, and I have also come to believe that, given the great variety of libraries, staff, communities, and missions, working to discover a universal set of “best practices” for all storytimes is not the best strategy for me to use to improve storytimes at my library.
As I start a new adventure with a dedicated team of storytime providers, I will be challenging myself not to think about “best practices” but instead about “intentional practices” and “best questions.”
Intentional practices: Given our goals for storytime and for our community, how can we be thoughtful and deliberate about every aspect of storytime so that we meet our objectives? As the survey reader said, “Make the purpose of the practice crystal clear.”
Best questions: What can we ask ourselves about planning, preparing, and delivering storytimes to clarify the impact of our practices on our goals?
I’ll be observing (and presenting!) even more storytimes than ever in my new job, and I promise to post the questions my team and I ask ourselves as we work to bring even more thought and intention to the work we do.
What questions do you ask yourself about your storytimes? How do you choose to be intentional about what you do in storytime?