The Browsing Collection display for the month of November includes a new feature: a QR code that connects visitors, via their mobile devices, to the tagged list of books included in the display. This is the first of several QR code implementations we are launching at the Marriott Library.

What is a QR code? A QR code is a two-dimensional (data matrix) barcode designed to be decoded at high speed by mobile devices and smartphones. QR codes are generally embedded with a URL, but they can be used for anything from pushing out phone numbers, text, and contact information to delivering RSS feeds and Google Places/Maps to a mobile device. A QR code is read with a QR code reader downloaded to the mobile device; the reader uses the device's camera to decode the code and then opens the URL or displays the associated information. You can find a list of supported readers, organized by mobile device and smartphone manufacturer, here. QR codes are easy to create, and many generators are available free of charge. Most generators also offer a statistics feature that makes tracking frequency of use easy.
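For readers curious what generating a code actually involves, here is a minimal sketch using the third-party Python qrcode package (with Pillow installed for image output); the URL is a placeholder for illustration, not an actual library link.

```python
import qrcode

# URL the code should open when scanned (a placeholder, not a real link).
display_url = "https://www.lib.utah.edu/browsing-collection/november"

# Build the code explicitly rather than via the qrcode.make() shortcut,
# so the error-correction level and module size are visible.
qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_M,  # tolerates ~15% damage
    box_size=10,  # pixels per module
    border=4,     # quiet zone around the code, in modules
)
qr.add_data(display_url)
qr.make(fit=True)  # choose the smallest QR version that fits the data

# Render a black-on-white PNG suitable for printing on a display sign.
img = qr.make_image(fill_color="black", back_color="white")
img.save("browsing_collection_qr.png")
```

Pointing the code at a short redirect URL you control, rather than at the final page, is one easy way to collect the usage statistics mentioned above.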

Marriott began experimenting with QR codes because mobile devices are ubiquitous in the library and across the University of Utah campus. We are interested in providing our faculty, students, and visitors with quick and efficient access to the information they want, in the format they want it in. Currently the Marriott Library is using QR codes to connect patrons to the catalog, browse course reserves, view classroom schedules, check library hours, get directions, and call the Knowledge Commons desk.

In the future, we will use QR codes to connect the physical with the digital by placing them next to works in the library's permanent art collection. When scanned, each code will route to the collection online, where patrons will find additional information about the work. This is an interesting opportunity to provide a more immersive experience, and we expect it to be fully in place by June 2011.

QR codes were created in 1994 by the Japanese corporation Denso Wave to serve as an inventory tool. The technology was used that way for several years, until the recent explosion in mobile devices with internet access gave QR codes a new lease on life: advertisers and merchandisers began adopting them as a cheap and efficient way to promote goods and services. QR codes are now widespread in Asia, Australia, Europe, and the UK. While still relatively new to the US, QR codes are anticipated to give libraries an efficient and effective way to address a variety of patron needs.

The potential for QR codes to play a significant role in education is beginning to take shape as well. For example, the University of Technology in Sydney (UTS), Australia, has created a poster demonstrating the application of QR codes in teaching, learning and research.

Publishers are also getting on board, incorporating QR codes in print to connect the reader with additional resources, interactive forums, and related video and audio; here is one such example.

We’ll be posting updates on the QR code experiment at Marriott. Do you have an idea for implementing a QR code that you’d like to share? We’d love to hear it!

In June 2010, the Washington State Historical Society, the University of Utah's J. Willard Marriott Library, and the University of Michigan Library all entered milestone records into WorldCat using the WorldCat Digital Collection Gateway. The 400,000th milestone record, Golden-cheeked Warbler 1 (www.worldcat.org/oclc/614416763), was entered by the Marriott Library; it is a sound recording from the Western Soundscape Archive.

This is an interesting post from the Library of Congress Digital Preservation site:

The International Internet Preservation Consortium recently released a web archives registry. The registry offers a single point of access to a comprehensive overview of member web archiving efforts and outputs. Twenty-one archives from around the world are currently included; updates will be added as additional archives are made accessible by IIPC members.

In addition to a detailed description of each web archive, each registry entry includes the following information (sketched as a data structure after the list):

  • Collecting institution
  • Start date
  • Archive interface language(s)
  • Access methods (URL search, keyword search, full text search, thematic, etc.)
  • Harvesting methods (National domain, event, thematic, etc.)
  • Access restrictions
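To make the shape of a registry entry concrete, here is a minimal sketch of the fields above as a Python data structure; the field names and sample values are illustrative assumptions, not the IIPC's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class WebArchiveRegistryEntry:
    """One archive's entry in the registry (illustrative, not the real schema)."""
    name: str
    collecting_institution: str
    start_date: str                       # e.g. "2005"
    interface_languages: list[str] = field(default_factory=list)
    access_methods: list[str] = field(default_factory=list)      # URL, keyword, full text, thematic...
    harvesting_methods: list[str] = field(default_factory=list)  # national domain, event, thematic...
    access_restrictions: str = "none stated"

# A hypothetical entry, for illustration only.
entry = WebArchiveRegistryEntry(
    name="Example National Web Archive",
    collecting_institution="Example National Library",
    start_date="2005",
    interface_languages=["English", "French"],
    access_methods=["URL search", "full text search"],
    harvesting_methods=["national domain", "event"],
)
print(entry.collecting_institution)
```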

The registry was put in place by the IIPC Access Working Group, which focuses on the initiatives, procedures, and tools required to provide immediate and future access to archived web material. The registry will also provide a basis for the IIPC to explore integrated access and search in the future.

Preserving the web is not a task of any single institution. It is a mission common to all IIPC members, and many practices and lessons are transferable. The launch of the archive registry showcases international collaboration for preserving web content for future generations.

The IIPC was chartered in 2003 with 12 participating institutions. Today, there are over thirty-five member organizations. More information about the IIPC can be found at http://netpreserve.org.

At the Marriott Library, we’ve recently begun looking into what it would take to archive websites that are important to the University.  During some research into this area, I came across the proceedings of the 2009 International Web Archiving Workshop (IWAW).

An interesting project taking place in France may change the way web archiving is approached. At Pierre and Marie Curie University (UPMC) in Paris, researchers are developing a web crawler that will not only detect changes to a website but also distinguish unimportant changes (rotating ads on a page, etc.) from changes that matter to the page's content. If successful, this could greatly improve the effectiveness of web archiving systems, because digital archives would no longer gum up bandwidth and storage space with needless data.

This project is taking place in conjunction with the French National Audio-Visual Institute (INA), which would like to archive French television and radio station websites. For those sites, the visual presentation of the pages, not just their content, is very important to the project.

According to the workshop proceedings, the project idea is to “use a visual page analysis to assign importance to web pages parts, according to their relative location. In other words, page versions are restructured according to their visual representation. Detecting changes on such restructured page versions gives relevant information for understanding the dynamics of the web sites. A web page can be partitioned into multiple segments or blocks and, often, the blocks in a page have a different importance. In fact, different regions inside a web page have different importance weights according to their location, area size, content, etc. Typically, the most important information is on the center of a page, advertisement is on the header or on the left side and copyright is on the footer. Once the page is segmented, then a relative importance must be assigned to each block…Comparing two pages based on their visual representation is semantically more informative than with their HTML representation.”
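As a rough illustration of the idea (not the researchers' actual algorithm), here is a sketch in Python: a page is assumed to be already segmented into blocks, each block gets an importance weight from its position and size, and the change score between two versions is the weighted share of importance carried by blocks whose content changed. The segmentation, the weighting rule, and the block matching are all simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class Block:
    """One visual segment of a page (assumed given by a prior segmentation step)."""
    region: str    # "center", "header", "left", "right", "footer"
    area: float    # fraction of the page the block covers, 0..1
    content: str   # extracted content of the block

# Toy weights following the quoted heuristic: the center matters most;
# header/left (often ads) and footer (copyright) matter least.
REGION_WEIGHT = {"center": 1.0, "header": 0.2, "left": 0.2, "right": 0.4, "footer": 0.1}

def importance(block: Block) -> float:
    """Importance of a block, derived from its location and area."""
    return REGION_WEIGHT.get(block.region, 0.5) * block.area

def change_score(old: list[Block], new: list[Block]) -> float:
    """Weighted proportion of page importance that changed between versions.

    Blocks are matched by list position here; a real system would match them
    by visual layout, which is the hard part of the research.
    """
    total = changed = 0.0
    for a, b in zip(old, new):
        w = max(importance(a), importance(b))
        total += w
        if a.content != b.content:
            changed += w
    return changed / total if total else 0.0

# Two versions of a page: the ad rotated (unimportant), the article is unchanged.
v1 = [Block("center", 0.6, "article text"), Block("left", 0.1, "ad #1")]
v2 = [Block("center", 0.6, "article text"), Block("left", 0.1, "ad #2")]
print(change_score(v1, v2))  # small score -> probably not worth re-archiving
```

A crawler built on this idea could skip re-archiving any page whose change score falls below some threshold, which is where the bandwidth and storage savings would come from.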

The main concept and hopeful contribution to the world of web archiving is summed up by the presenters as follows:

• A novel web archiving approach that combines three concepts: visual page analysis (or segmentation), visual change detection and importance of web page’s blocks.

• An extension of an existing visual segmentation model to describe the whole visual aspect of the web page.

• An adequate change detection algorithm that computes changes between visual layout structures of web pages with a reasonable complexity in time.

• A method to evaluate the importance of changes occurring between consecutive versions of documents.

• An implementation of our approach and some experiments to demonstrate its feasibility.

It will be interesting to follow this project through to its conclusion and see how its results affect current web archiving players like Archive-it.org as well as fellow research endeavors like the Memento Project.

You can read about this project in much more technical detail at the IWAW website (unless it’s been taken down and hasn’t been properly archived).

http://iwaw.europarchive.org/

County Map of Utah

Selected images of Salt Lake City from the Utah State Historical Society Shipler Photograph Collection

Open Access News
How the internet is transforming scholarly research and publication

More on U-SKIS
By Peter Suber

Anne Morrow and Allyson Mower, "University Scholarly Knowledge Inventory System: A Workflow System for Institutional Repositories," Cataloging & Classification Quarterly 47, no. 3-4 (2009): 286-296.

Abstract: The University Scholarly Knowledge Inventory System (U-SKIS) provides workspace for institutional repository staff. U-SKIS tracks files, communications, and publishers’ archiving policies to determine what may be added to a repository. A team at the University of Utah developed the system as part of a strategy to gather previously published peer-reviewed articles. As campus outreach programs developed, coordinators quickly amassed thousands of journal articles requiring copyright research and permission. This article describes the creation of U-SKIS, addresses the educational role U-SKIS plays in the scholarly communication arena, and explores the implications of implementing scalable workflow systems for other digital collections.

PS: Also see our past posts on U-SKIS.

(it is humbly suggested the appropriate background tune for this title would be “Riders on the Storm” by The Doors…)

Last week at the Utah Library Association annual conference, Tracy Medley and I gave the presentation "Pilots on the fringe: flickr as a tool to promote digital collections." Our curiosity about putting objects from the digital library into social networking spaces was first piqued by the hugely successful pilot project the Library of Congress and flickr started in January 2008. LOC had a few thousand photos with little descriptive information and teamed up with flickr to make the images available as an experiment in social networking. LOC continues to add 50 new photographs to the collection each Friday.

We created a similar pilot project in Fall 2008. Over the following months it was surprising to see such a small collection, only a couple hundred images, account for such large numbers: in the roughly six months since launching the Marriott Library flickr collection, the images have been viewed more than 9,400 times. The success of the pilot was so sudden, and so far off the charts, that whole new sets of questions quickly arose: how should we fold this into digital production? How much of any given digital collection should we add to flickr? What other avenues (both inside and outside of flickr) should we develop?

YouTube being to video what flickr is to photographs, we wondered whether putting video highlights from our Moving Image archives on YouTube would garner similar interest. In April 2009 we launched a Marriott Library channel. The experiment is still in its infancy, but the videos receiving a fair amount of traffic so far are 1968 footage of Robert and Ethel Kennedy at the Salt Lake City Airport and footage of the 1934 University of Utah vs. Utah Agricultural College (Utah State) football game.

Selecting material for YouTube and flickr that is likely to attract a high degree of interest, tagging the content so that it is optimally discoverable, and getting the word out about the collections, using both traditional and Web 2.0 channels, will be important priorities. A key component of Web 2.0 is its integrative nature; fully exploring a tool like flickr or YouTube means using it in concert with other Web 2.0 applications like facebook, twitter, WordPress, Yahoo! Maps, and delicious.
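As a sketch of what that tagging step could look like programmatically, here is a minimal example using the third-party Python flickrapi package against Flickr's public API; the API key, photo ID, and tags are placeholders, and a real run needs credentials authorized for write access.

```python
import flickrapi

# Placeholder credentials -- substitute a real Flickr API key and secret.
API_KEY = "your-api-key"
API_SECRET = "your-api-secret"

flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET)
flickr.authenticate_via_browser(perms="write")  # one-time OAuth handshake

# Hypothetical photo from the pilot collection, with descriptive tags
# chosen to make it discoverable in searches.
photo_id = "1234567890"  # placeholder ID
tags = '"Salt Lake City" "Shipler Collection" Utah history photograph'

# flickr.photos.addTags is a standard Flickr API method; multi-word tags
# are quoted, and tags are separated by spaces.
flickr.photos.addTags(photo_id=photo_id, tags=tags)
```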

Predicting which of these Web 2.0 applications will have 'legs' is a bit like being both a weather forecaster and a soothsayer: we can be reasonably certain of possible outcomes, but in the end we are still making a prediction. While some of these experiments ultimately may not have longevity, the pilots are demonstrating that there is measurable interest and appeal in making aspects of a library's services and collections available in a variety of Web 2.0 venues... you might think of it as a sort of cyber-bookmobile.
