Saturday, November 20, 2010

Readings for Wk #11

Web Search Engines, Pts. 1 & 2 / David Hawking

These articles explain how search engines use crawling algorithms to search and index the web. In part 1, Hawking describes how crawling machines are assigned to specific URLs via hashing. If a crawler comes across a URL that is not assigned to it, it sends it along to the correct crawler. Indexers first scan and then sort documents containing specific words and phrases. Like crawlers, indexers are also assigned to specific URLs to manage the volume of documents that will be analyzed.
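The hashing scheme Hawking describes can be sketched in a few lines. Everything here (the function names, the MD5 choice, the pool size) is my own illustration of the idea, not Hawking's actual implementation:

```python
# Hypothetical sketch: partition URLs among crawler machines by hashing the
# hostname, so each machine "owns" a stable slice of the web.
import hashlib

NUM_CRAWLERS = 8  # illustrative pool size

def assigned_crawler(url: str, num_crawlers: int = NUM_CRAWLERS) -> int:
    """Map a URL's hostname to one crawler in the pool."""
    host = url.split("//", 1)[-1].split("/", 1)[0]
    digest = hashlib.md5(host.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_crawlers

def handle(url: str, my_id: int, forward) -> str:
    """Crawl the URL if it's ours; otherwise hand it to its assigned crawler."""
    owner = assigned_crawler(url)
    if owner == my_id:
        return "crawl"
    forward(owner, url)
    return "forwarded"
```

Because the hash depends only on the hostname, every crawler that discovers a link to the same site computes the same owner, which is what lets them forward stray URLs to the right machine.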

Current Developments and Future Trends for the OAI Protocol for Metadata Harvesting / Shreeves, Habing, et al.


OAI-PMH was created to facilitate access to online archives via shared metadata standards. These shared standards allow users from different organizations or users of different systems to easily share resources. The participating repositories use metadata standards such as Dublin Core, expressed in XML. In the future, OAI-PMH will work towards making its registry more searchable and providing better descriptions.
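For a concrete sense of how harvesting works, here's a hedged sketch of the kind of HTTP request an OAI-PMH harvester issues. The base URL and helper name below are placeholders of mine, but the `verb` and `metadataPrefix` parameters are part of the protocol, and `oai_dc` (unqualified Dublin Core) is the metadata format every repository must support:

```python
# Minimal sketch of building an OAI-PMH ListRecords request.
# The repository answers with an XML document of metadata records.
from urllib.parse import urlencode

def list_records_url(base_url: str, metadata_prefix: str = "oai_dc") -> str:
    """Build a ListRecords request URL for an OAI-PMH repository."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    return f"{base_url}?{urlencode(params)}"

# Placeholder base URL, not a real repository:
print(list_records_url("http://example.org/oai"))
```

This is the whole trick of the protocol: because every repository answers the same small set of verbs with the same baseline metadata format, one harvester can aggregate records from many different systems.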




The Deep Web: Surfacing Hidden Value / Bergman

When performing an internet search, a typical user is only scratching the surface of the web. According to Bergman, most are getting just 0.03% of what is actually available. The "deep web" is the other 99.97%. Many of these sites are made up of company/business intranets, specialized databases, archives, & repositories. This article is ten years old and I wonder how much of this information has changed because of the sophistication of current search tools. I do believe parts of the web remain "hidden" but they're not as inaccessible as they once were.


Comments from Wk #11


http://nrampsblog.blogspot.com/2010/11/unit-11-web-search-and-oai-protocol.html?showComment=1290312661547#c4758384399719463169

Saturday, November 13, 2010

Comments from Wk #10

https://lis2060notes.wordpress.com/2010/11/06/reading-nov-15/#comment-20

http://maj66.blogspot.com/2010/11/week-10-readings.html?showComment=1289710758965#c2196093730956100326

Readings for Wk #10

Digital Libraries: Challenges & Influential Work / Mischo

(This article really made me appreciate how far we've come in digital libraries technology. When I started college in 1991, I had to do a research paper for sociology class. I spent 3 days researching my subject in two different libraries and then had to make a 15 minute appointment to have access to a specific database. It's amazing what has happened in just 20 years.)
Mischo gives a brief history of digital library projects and why they were developed. Digital libraries were created out of the need to make large amounts of information housed in several different places/systems easily accessible via simpler portals. Like most projects of this scale involving and affecting several fields, it was primarily funded by the government and launched at a few select university libraries. The most surprising thing to me was that the early stages of this project were undertaken during the early age of the WWW. Thanks to this group of developers, programmers, engineers, and libraries, anyone can just visit ProQuest, Muse, or Google Scholar to download books, articles, etc. on almost any subject instead of visiting 3 different libraries to use specialized machines or databases.


Dewey Meets Turing: Librarians, Computer Scientists & DLI / Paepcke, Garcia-Molina, Wesley


This article explores the mostly harmonious relationship between librarians and computer scientists in the context of the Digital Library Initiatives. Working together on this project made sense in so many ways initially because both understood the need to build collections that could be "search[ed], organiz[ed], and brows[ed]." However, with the rise of the Web, both groups had to adjust their thinking on how to implement many of their goals. Computer scientists were naturally drawn to the breakthroughs made possible by the Web (machine learning, links everywhere - not just local, etc) while librarians had to grapple with higher prices for online journal content. As this relationship has evolved since the early DLI projects, librarians and computer scientists have been able to learn from each other. Computer scientists have collected websites of similar topics into hubs. Librarians can now help these computer scientists manage their scholarly publications online.


Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age / Lynch


Institutional repositories are gaining popularity for several reasons: metadata standards have been implemented, online storage is cheap, serial prices are high, & repositories promote scholarship at their institutions. Lynch uses MIT's DSpace as a model repository that utilized open source software and corporate partnerships (in this case with Hewlett-Packard). While creating their own repositories can lower costs for institutions/libraries (cutting out contracting with other firms to handle digital storage), Lynch warns them to stay on mission. First, don't use the repository to control or impose ownership over students', faculty's, or researchers' intellectual property. Lynch states that successful repositories "are responsive to the needs ...and advance the interests of campus communities and of scholarship broadly." Second, he says that repositories can't be slowed down or burdened by heavy policies. Libraries, faculty, & researchers must cooperate on making policies that don't advance one group's agenda over the others'. Third, institutions must be committed to maintaining & funding the repository after it's established.

Sunday, November 7, 2010

Muddiest Point - Wk #9

I think I understand the reasoning behind using XML, but am not following the difference between DTD & XML schemas. Why are schemas better than DTD?

Saturday, November 6, 2010

Readings for Wk #9

Introducing the Extensible Markup Language (XML) / Bryan, Extending Your Markup / Bergholz, A Survey of XML Standards / Ogbuji, XML Schema Tutorial / W3

Extensible Markup Language (XML) is a "subset of Standard Generalized Markup Language (SGML)" made to carry & store data. It is more flexible than HTML because the user can define their own tags, which makes it easier to share data across different languages & fields. Users don't need a particular version of software to create documents in XML structure. XML governs the structure of data & not, as HTML does, what that data will look like. XML is expressed in documents, which are made up of entities, which are made of elements. Elements can carry attributes. For some reason, this breakdown of the structure of XML documents makes me think of second grade grammar lessons when we learned sentence diagramming. These articles (except W3) were difficult to get through because they assume the reader has some experience/background with SGML. I'll have to re-read & go through the tutorials a few more times.
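A toy example of that document → element → attribute hierarchy, using Python's built-in parser. The tag names and values here are invented for illustration:

```python
# A made-up XML document: the <catalog> document holds <book> elements,
# and the <book> element carries "id" and "lang" attributes.
import xml.etree.ElementTree as ET

doc = """<catalog>
  <book id="TX723" lang="en">
    <title>Italian Cooking</title>
    <author>Batali, Mario</author>
  </book>
</catalog>"""

root = ET.fromstring(doc)
book = root.find("book")
print(book.get("id"))            # reads the element's "id" attribute
print(book.find("title").text)   # reads the text of a child element
```

Because we invented the `<book>` and `<title>` tags ourselves, any program that agrees on those names can read this data - exactly the flexibility the articles describe.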

Comments from Wk #9


http://marypatelhattab.blogspot.com/2010/11/week-9-readings.html?showComment=1289098825453#c5930458660809599655

Tuesday, November 2, 2010

Assignment #5

Here's the link to my Koha book shelf:
http://upitt01-staff.kwc.kohalibrary.com/cgi-bin/koha/virtualshelves/shelves.pl?viewshelf=79
It's titled "Solo Library Management" & covers a few of the books that have been a tremendous help with my job & some books I've been meaning to read. My user name is TRW37.

Monday, November 1, 2010

Muddiest Point - Wk #8

Is it necessary to know HTML any more since it seems as if most web editing software has HTML built-in?

Readings for Wk #8

W3Schools HTML Tutorial / W3Schools.com

HTML is the markup language used to describe websites. Before reading this, I knew enough about HTML to recognize it when I see it & to do some basic edits (change fonts/sizes, make lists, bold, etc) to an existing website. This site was very easy to follow - easier to follow than the many manuals & seminars I've read/attended on building websites. I especially liked how it gave space for real-time practice. 


HTML Cheat sheet /  Webmonkey


I've bookmarked this page for future reference. It'll be very handy for any web editing projects because although HTML is a good thing to be familiar with, no one is going to remember all the correct tags.




W3Schools CSS Tutorial / W3Schools.com

 Cascading Style Sheets (CSS) allows you to apply styles to many web pages at once, saving a huge amount of time and effort. (I'm a little more familiar with this because of a web project at work [which ate my life] last year.) CSS is perfect for large-scale projects, like creating a new website. Once you've decided the basic format & the elements you'll need for each of the pages on the site (i.e. background colors, fonts, sizes, etc.), you can save those elements in a .css file & use it to style your pages. 


 Beyond HTML / Goans & Leach


This article discusses one library's adoption of a content management system to manage their web guides. CMS can make web sites easier to manage and edit because prior knowledge of HTML is not necessary. For this particular library, CMS also allowed them to be more creative in tagging and customizing information to better meet the needs of their users. It was interesting to see that when the authors surveyed librarians and liaisons in their use of CMS, most respondents indicated that "ease of use" was the deciding factor in choosing CMS.

Monday, October 25, 2010

Muddiest Point - Wk #7

I have no muddy point this week.

Comments from Wk#7

http://lostscribe459.blogspot.com/2010/10/week-7-readings.html?showComment=1288062582006#c1031361247638977850

http://acovel.blogspot.com/2010/10/week-7-reading-notes.html?showComment=1288062780182#c6721777066282778145

Readings for Wk #7

How Internet Infrastructure Works / HowStuffWorks.com

A very simple and easy-to-understand breakdown of how the back-end of the Internet works. It's something most of us use daily without thinking, so it's helpful to know how my computer at home links up to the network. (This has been especially timely for me because, thanks to some heavy rain storms, my internet has been spotty or nonexistent lately. It's amazing how a few little broken wires in a junction box can kill the internet connection for an entire neighborhood.)

Dismantling Integrated Library Systems / Library Journal

The article shows some of the challenges of adopting and managing integrated library systems. These systems form the back end of library management, including cataloging, circulation, serials management, and OPACs. They sometimes have to cover the gamut of specialized library functions as well: reserves, digital resources. As technology has grown exponentially over the years, ILSs have had to adapt. Libraries have sometimes had to create their own modules or systems to meet their changing needs because ILS vendors can't (or won't) deliver these necessary customizations within already stretched library budgets. I'm very sure that more libraries will be turning to open source software or hybrid products to meet their needs.

Sergey Brin & Larry Page on Google / TED

It was interesting to hear more about the most popular search engine from its creators. I like their 20% rule: employees spend 20% of their work day doing whatever they think is best to work on. It's from these side projects that they're able to make big breakthroughs & innovations. Because they nurture their employees' passions, Google has been able to grow & they're still in the top 10 most desirable companies to work for.

Assignment #4 Personal Bibliographic Management Systems

http://www.citeulike.org/user/tbm473

Sunday, October 10, 2010

Readings for Wk #6

Local Area Network & Computer Network / Wikipedia

These articles explained how computers can be connected to communicate & share resources among several users. There weren't really any sticky points in either article that I couldn't follow or understand. One thing I didn't know before was that ARPANET was the first computer network. (I had thought it was CERN, where Tim Berners-Lee created the Web.) Although my home has four computers, a printer, game consoles, & a fax machine, I never really thought of the whole thing as a personal area network.

RFID/ Coyle

I'm confused about this part of the assignment - didn't we read/discuss this earlier in the semester? Here's what I posted to the DB then:
 I do think that RFID can be useful in libraries only if the privacy concerns of the patrons are carefully addressed before implementing this technology. Libraries have a responsibility to protect the privacy of their users & make sure that they understand tools the library uses to safeguard their materials. The library should have a "plain English" privacy statement posted on their website & in other prominent places in the library. Reference & circulation staff should also explain the purpose of the tags to assuage the concerns of library users.
For library staff, RFID can help guard against theft or loss. It could also save many, many hours of staff time in collection maintenance. But, because of the high cost of implementation, many libraries can't adopt this technology.

Monday, October 4, 2010

Comments from Wk #5

http://gvbright.blogspot.com/2010/10/week-5-readings.html?showComment=1286193936782#c64147597157649588

http://nancyslisblog.blogspot.com/2010/10/reading-notes-database.html?showComment=1286194905199#c3801220370052995325

Muddiest Point - Wk #5

Will adopting DCMI make library catalogs (OPACs) like Google (in ease of use)?

Readings for Week #5

(This week's reading gave me some pretty awful cataloging flashbacks - all of those acronyms came rushing back in a flood. AACR, MARC, & RDA, oh my! After a 2 year stint in cataloging, I knew that I was better suited for reference work. To all of those aspiring catalogers out there, I salute you!)

Database/ Wikipedia
This article was a fairly straight-forward explanation of what databases are (a collection of data), their varieties, & how they're used. I've worked with relational database management systems & software (Access, SQL), but I don't have that much experience with object DBMSs.
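The relational idea - separate tables tied together by keys - is easy to see with Python's built-in sqlite3 module. The table and column names below are made up for illustration:

```python
# A tiny relational database in memory: authors and books live in separate
# tables, and a JOIN on the shared key reassembles the full picture.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE books (title TEXT, author_id INTEGER)")
conn.execute("INSERT INTO authors VALUES (1, 'Batali, Mario')")
conn.execute("INSERT INTO books VALUES ('Molto Italiano', 1)")

row = conn.execute(
    "SELECT a.name, b.title FROM books b JOIN authors a ON a.id = b.author_id"
).fetchone()
print(row)  # ('Batali, Mario', 'Molto Italiano')
```

Storing the author once and referencing it by `id` is what keeps relational systems like Access or SQL Server free of duplicated data.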

Introduction to Metadata / Gilliland & An Overview of the Dublin Core Data Model / Miller
Metadata is, basically, "data about data." This term is used to describe how an object (or set of objects) is classified, found, or managed in a specific setting/organization. In many academic libraries, Library of Congress Subject Headings (LCSH) are used, while some school & public libraries use Dewey Decimal Classification (DDC). A lucky few of us who work in special libraries end up creating special in-house systems. Tagging in user-generated environments (blogs, wikis, etc) comes pretty close to what metadata is about. Having good metadata is enormously important for accessibility, whether you are in an academic, public, or special library.

Let's say you are searching for an Italian cookbook in your library. You've checked the online catalog & found the call number associated with "cooking, Italian" is TX723. In the Library of Congress Classification, TX means "home economics" and the number range 642-840 covers specific methods or varieties of cooking. If the online catalog has more complete records available for public view, you can also see author & publisher information, physical details about the item (format, pages, illustrations, photos), physical location, & other classification schemes. On the cataloger side, one can see all of that information as well as the machine-readable (MARC) fields that make that item accessible to external databases. The public sees Batali, Mario in the Author field, while a cataloger or external database sees his name in the 100 field represented this way: 100 1 Batali, Mario.
Unfortunately, as thorough as it is, LCSH (or any other cataloging scheme) can't possibly accurately describe data for every discipline. That's where the Dublin Core Metadata Initiative comes in. This initiative proposes standardized metadata that can be used across disciplines, languages, & national boundaries. As you can see from my lengthy cataloging example above, each information community has its own language & terms that may not easily or ever translate to another community - even if they're describing the same objects. The Dublin Core Metadata Element Set is the basic set of fifteen components used for resource description, ranging from "creator" to "type."
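To make the contrast concrete, here's a hypothetical sketch of that same cookbook described with a handful of the fifteen Dublin Core elements. The values are illustrative, not a real catalog record:

```python
# A few of the fifteen Dublin Core elements applied to the cookbook example.
# Element names (title, creator, subject, type, language) come from the
# Dublin Core Metadata Element Set; the values are invented.
record = {
    "title": "Molto Italiano",
    "creator": "Batali, Mario",
    "subject": "Cooking, Italian",
    "type": "Text",
    "language": "en",
}

# Because every community uses the same element names, another system can
# read this record without knowing LCC, DDC, or MARC conventions.
for element, value in record.items():
    print(f"dc:{element} = {value}")
```

Compare this with the MARC 100 field above: the same fact (who wrote it) lives in `dc:creator` here, which any discipline's system can interpret.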

Monday, September 27, 2010

Muddiest Point for Wk #4

I have no muddy points for this week.

Readings for Wk #4

Data Compression / Wikipedia & Data Compression Basics / DVD-HQ
These clearly written & easy to understand articles on data compression covered some basics I am familiar with:
  • Compression can help save bandwidth & disk space - very important for those of us with limited computing power. Compression allows you to be a better steward of resources.
  • In order for data compression to be successful, both the sender & receiver must understand how the information is encoded
    • You probably won't be able to play WMA music files on some MP3 players - the files will have to be converted first.
But the articles also raised some important points that I wasn't aware of:
  • Lossless compression - original data can be retrieved
  • Lossy compression - loses some of the data so it can be compressed, but the results are not the same as the original. This is like making a copy of a copy - the more copies you make of the copy, the less it looks like the original.
As we librarians digitize & share more information, we have to be sure we're using the right formats/techniques for our users.
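The lossless half of that distinction is easy to demonstrate with Python's built-in zlib module: the round trip gives back exactly the original bytes, nothing thrown away (the sample text is just filler of my own):

```python
# Lossless compression round-trip with zlib: decompress(compress(x)) == x.
# Lossy formats like JPEG or MP3 deliberately break this equality to save space.
import zlib

original = b"the rain in spain stays mainly in the plain " * 50
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), len(compressed))  # repetitive text compresses very well
assert restored == original            # lossless: the data survives intact
```

This also shows why sender & receiver must agree on the encoding: `zlib.decompress` only works because both ends speak zlib's format, just as an MP3 player must understand WMA before it can play one.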

 Imaging Pittsburgh / Galloway


This article emphasizes the importance of collaboration, planning, & metadata. I appreciated how he explained all of the challenges (metadata, selection, website) involved in this project & how they arrived at solutions that were accepted by all collaborators. The resulting website is impressive & demonstrates how important projects like these are not only to libraries & historical societies, but also to the public.

YouTube & Libraries / Webb

This article on using YouTube for library marketing, promotion, & instruction is pretty timely for me professionally. In the near future my web committee will be creating a YouTube channel to promote our career center's services, events, & tips on career planning. YouTube (in addition to Facebook) is probably one of the best ways to get the attention of our target undergraduate audience - it's free, the students use YouTube already, & there are relatively few administrative hoops we'd have to jump through.

Monday, September 20, 2010

Week #3 Computer Software

Introduction to Linux
I'm a little familiar with this system, mostly because my husband is a tech-geek & he has set up one of our old computers to run Linux. I've been too afraid I'd break something so I haven't gotten any hands-on experience with it yet. This article has almost convinced me to take it for a spin. I like the idea of an open source product that can be customized to fit user needs, yet it's not that easy to learn, especially if you're not already a programmer or power user. Linux does seem like it could be the answer to some of my work-related tech issues.

Mac OS X
Macs have always struck me as (Mac users, don't hate me) a bit "cultish." According to Apple & many Mac acolytes, Macs are safer, more intuitive, better for creative people, & easier to use than PCs. To many PC users, Macs are difficult, not for serious programmers/hackers/techies, too expensive, & too trendy. As a long time PC user on a seriously strict budget, I mostly fall into the latter group. I liked how the author of What is Mac OS X dealt with these differing opinions at the beginning of his article. I also appreciated how he broke down the differences between OS X, Windows, & Linux while acknowledging the pros & cons of each system.


Windows Roadmap
This article was basically a heavy dose of the Windows "Marketing Kool-Aid" alluded to in the What is Mac OS X article - it was annoying, to say the least. It was comforting to find that they will continue to support XP until 2014.

Friday, September 10, 2010

Muddiest Point - Wk #1

I have no muddiest point for this session.

Readings for Wk #2

I was pretty surprised to be reading Wikipedia articles for this class. Wikipedia has come a long way, but my inner old-school librarian was skeptical. Maybe this is to drive home the point that good information can be found in non-standard places?



Personal Computer Hardware / Wikipedia


This article didn't really offer anything new for me to consider - I was already aware of basic computer parts. It did bring up bad memories of the dreaded Iomega Zip Drive. That was my first and last attempt at early tech adoption.


Moore's Law / Wikipedia

Moore's Law states that the number of transistors manufactured "at optimal minimal cost" doubles approximately every two years. There's been some conjecture over the years about exactly what time frame Moore really specified (18 months, one year, three years). People mistakenly think it refers to increased processing power - although that can be a nice side effect of the smaller, more efficient chips that are developed. Although it's not actually a law, the technology industry seems driven to keep up with it, using it as a measurement of its progress. It also seems that every ten years or so, scientists or companies will say that the trend will expire ten years into the future. Some futurists believe Moore's Law will bring us to a technological singularity, a period of growth so great that it will usher in a new age of artificial intelligence & hybrid humans/machines. I don't know if I completely believe that particular theory, but I do know that we've experienced the exponential growth in technology that Moore described over 40 years ago.
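Moore's observation is just repeated doubling, which is easy to check with a little arithmetic. The starting count below (roughly the Intel 4004's 2,300 transistors) is only illustrative:

```python
# Doubling every two years means growth by a factor of 2**(years / 2).
def transistors(start_count: int, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward under Moore's Law."""
    return start_count * 2 ** (years / doubling_period)

# Over 40 years at a two-year doubling period, that's 2**20,
# roughly a million-fold increase:
print(transistors(2300, 40) / 2300)  # → 1048576.0
```

This also shows why the disputed time frame matters so much: with an 18-month period instead of two years, the same 40 years would give 2**26.67, over sixty times more growth.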


Computer History Museum
At first glance the site was not that interesting to me - I wasn't sure what we were supposed to focus on. As I explored the site, I found the exhibits page & was surprised to see an exhibit dedicated to the Babbage (or Difference) Engine, which is considered to be one of the first computers. In 1821, mathematician Charles Babbage had hoped to create a steam-powered calculating engine. While the Difference Engine proved to be a failure, Babbage salvaged some of it to work on the Analytical Engine - a true forerunner of computers. He was aided in expanding the purpose of his engine by Ada Lovelace, widely considered the first computer programmer. I was pleased to see her & her contribution mentioned, because women aren't represented very well on this site. Are contributions by women to computer history/technology so few, are they just poorly represented in the field, or are they just given short shrift altogether?

Wednesday, September 1, 2010

Readings for Wk #1

Lied Library @ four years: technology never stands still / J.Vaughan
This case study of a large academic library's technological evolution provides a good overview of the pros & cons of adopting new technologies & the unique challenges they pose. No detail was left out: from larger system updates & changes to analyzing printing costs. The one point the author raised that resonates with me (& probably most library/info pros) was "...the fact that so much information is available & expected online, 7/24/365, the times when the library is truly closed are fading away." Vaughan does not give exact dollar figures, but if a library of that caliber updates its computers only every three years, even with a vendor discount, it will cost millions of dollars. Also, when the costs of e-journals, databases, software & accessories are added in, along with staff wages, how do libraries balance the need to provide the services/information their users want with the realities of shrinking budgets? I wish Vaughan had gone into more detail on that particular point.



2004 Information Format Trends: Content, Not Containers / OCLC
This trend report from OCLC covers the explosion of digital content & how it affects the basic nature & purpose of library work. Although this research is six years old, it's still interesting to mark the predictions that have come true in the intervening years. Information is now available everywhere, to everyone, all the time - with or without the help of libraries. If library users are, as the article states, "content consumers" & "format agnostic," what will that mean for us as future library/info pros? What does it mean for us now as content consumers who have a deeper understanding of librarianship?

Full disclosure: I love the internet & being online. I am a consumer of online content: websites, blogs, social media, search engines - almost everything. On the one hand, I appreciate the free exchange of ideas in many avenues. I feel people should be able to create or access or describe most information in any legal way they want. I don't necessarily want to consult LC for the "correct" subject headings if I post vacation pictures on Flickr or books on LibraryThing. But, on the other hand, (please feel free to disagree with me here) I mostly agree with the article & think it's our responsibility as library/info pros to really engage in these new arenas of information (without taking over) & assist with "synthesiz[ing it] into knowledge."


Information Literacy and Information Technology Literacy: New Components in the Curriculum for a Digital Culture / Lynch
The author sets out to address/define what information technology literacy is, how it should be taught, & how it should be used. The author also puts forth the idea of student-created simulations for better understanding. It’s not enough to know how to use the tools – one must also understand how the technology infrastructure works together with (or against) social issues. As technology changes so much around us, it’s more important that everyone understands how & why these systems work. The line between strict technician & average user is blurring, so our jobs may be (as stated in the previous article) to help others navigate these systems. But my question is how? The author doesn't give many suggestions for this at all.

Intro

Hi LIS2600 classmates - welcome to my blog! My name is Tracy Wallace & I'm happy to be part of FT Cohort 10.5. I'm originally from Philadelphia, but I attended & now work for Penn State University (main campus). I have a BA in English, mostly because I love to read (naturally). I've held just one full time job in my adult life that wasn't library-related (receptionist - I was terrible) & after many years of flirting have decided to make it official by getting an MLIS. Of the library jobs I've held, I really enjoy my current job as the solo library assistant for the career center. Every day I have the opportunity to help students find & make sense of the information they need to get to the next steps in their lives. It's especially nice when they call/email with success stories or just to say thank you. Right now my goal is to explore some non-traditional paths in librarianship.