Oh, good…

Backchannel.com has a post on legibility in web design – mainly, it’s getting worse, intentionally.

Oh, good. I thought it was just me.



Stickiness and Stupidity

Time for another rant. RANT MODE=ON

I’m seeing web designers for vendors becoming obsessive about sucking people into their systems, getting their contact info, lining them up for a followup, checking the best time of day for a phone call from a rep… oh, you just had one simple question?

Sucks to be you, then, because we’re squeezing every bit of information possible about you into our system so we can sell you something… oh, that’s what the question was about, you just wanted a quote?

Still sucks to be you. You ain’t getting zip until we wind you around the web site a few times and then suck all that juicy info about you into our files.

Which probably don’t have all that much security protection, so we’ll get hacked and …

Which allows us to flood you with spam about our products, relevant or not, affordable or not…

Yep, still sucks to be you, customer.

Nobody wants to tell you anything until they get all that contact information out of you and put you on a list of spam advertising which — oddly enough — never seems to pay much attention to the detailed ‘interests’ list you had to fill in just to continue in the process of actually emailing a rep about your one simple question.

Now, I realize that making a web site ‘sticky’ so people stay on it, and getting info for followup, is one of the priorities that web designers are given these days. That has most likely been handed down by somebody who read a post or even a book on how to sell on the web, and the idea of squeezing blood — eh, information — out of the stone — eh, potential customer — was laid down as a new law of marketing.

A little heads up for marketing people:

  1. I’m busy. I don’t have time to set up a login for every site and vendor on the web, and keep track of the passwords and such (even with LastPass, it adds up). Actually, you get my email and/or phone as part of the deal if you just let me ask my question. Or is it too hard for you to extract that, or have the rep enter it into your system for you?
  2. Answer a simple question and I am happy. Jerk me around with sites designed to suck all the info you can out of me and I go looking elsewhere. Oh, you’re the sole source? Well, maybe I can talk the faculty into finding somebody more cooperative to buy something else from.
  3. Chances are, by the time I ever use this login again, your entire web architecture will have changed and invalidated it all. I have had as many as five invalid logins for one vendor, who kept ‘improving’ their services and forcing me to ask for a reset.
  4. I have a delete button. So all the email you spam me with gets deleted. I can even fix it so it gets deleted automatically. So at least go to the trouble to make it relevant, or I’ll block you. You insisted on putting me on a list, even when I unchecked the box for that, so don’t expect me to be nice.

The Internet and World Wide Web are amazingly flexible and customizable. Remember that and don’t be slaves to some general concepts about what somebody thinks you should do to market. Think of everything from the customer’s side.

Oh, and try using your own website like a customer would. Often, like at least after every change you make. A lot of the time, you broke the site. Just sayin’.

I am librarian, see me vent.



Adding a Discovery service

Been a busy spring.  Moved into our expanded building, changed the library’s URLs due to a domain change, and beginning July 1, we now have a discovery service.

We went with Ebsco, which at this point is not necessarily an endorsement, but it looked like a good bet.  We exported MARC records for our catalog holdings, and data for our online services (if they cooperated).  There are actually several months of preparation to step through, and I handled only a small part of it.  Our Periodicals Librarian did most of the rest, on our side.

The idea is to have everything possible in one search result.  Users can limit it (full text only, or peer-reviewed only, etc.) and/or restrict it to types of materials (articles only, for example).

A recent article contends that discovery tools are a bad idea for new researchers, and the author makes some good points.  The catch is, as I see it, that this is based on trying to achieve an ideal rather than a realistic situation.  With all due respect, we don’t have an ideal situation where we get to sit our students (or even faculty) down and step them through all the proper procedures to make them optimal researchers.  Frankly, I suspect that even if we did, it wouldn’t matter that much (or maybe a little for faculty).  They’d take the easy way because it’s easy and fast and doesn’t require them to think as much.

This is also important because our students (and sometimes faculty and staff!) may have plenty to deal with just learning the use of computers and software, never mind the intricacies of “proper” searching.  Tell somebody who barely gets in and out of various programs on their new laptop that they need to do their searching the harder (but better for them) way, and… I’m not that brave.

In our real situation, we’re lucky if we can get students in a classroom long enough to teach them how to research one specific assignment just before they go to work on it.  So, anything we can do to — at the very least — get them to actually realize that there are many resources available is a useful achievement.

I’m going to be interested to see if this actually increases use of resources just by showing all of them (well, most of them) in one search.  I suspect most users would much prefer to use one search, and if that happens to exclude a lot of resources, well, too bad.  If most things are there, however, the options widen considerably, and I’d rather teach how to narrow options than try to get people to try several different methods to search for different resources.  The former is optional, while the latter just doesn’t happen as often.


New look for CH

I’ve updated to a new theme for Computer Helpers.

I went with Fadtastic, which is a brighter, lighter, less rigid look.

It also gives me more flexibility than the Andreas04 theme I formerly used.

My one big concern is that the links are in a light green, but since you get a black-on-white label when you mouse over them, it may not be that bad.  It also converts some of the color screenshots in my own pages to black-and-white, but that’s not a big deal.

I’ve added an email subscription option, and Recent Posts, along with the RSS feed and an automated add-to-RSS for the popular feed readers (including my favorite, Sage in Firefox/Pale Moon).

The header still isn’t fancy, but it is very clear.

I tried a tag cloud, but it doesn’t look very good in one column of a three-column format (sort of muddled), and I still have reservations on how useful they really are, especially given the setup of the blog and the search box.

I hope everyone considers it an improvement.

Followup on the new website

More on the website revision, in part to remind myself what I did and why, and perhaps something here will be of use to others.

So, I’ve been converting our Innovative Interfaces catalog pages on our libcat server to a format as close to the new library.uafortsmith.edu server pages as possible.

I’ve wanted all along to make the transition as seamless as possible — ideally (IMHO) most users won’t realize they’re moving back and forth from one server to another.

Due to the wiki template set up on the campus website (and therefore the old library pages), however, we elected not to do that and allowed variation on the catalog pages.  Now, with the new server, I’m striving for identical looks.

Little things crop up in the process, of course.

Date Script

The JavaScript for the date in the header is one I borrowed some years back for use in the catalog.  We are finally able to use it on the regular home page now that we are on a server that allows it.  I like it because it specifically states what today is (day of the week and date) and today’s hours only.  There is also now a link to all our hours on a separate page.

When we’re closed, it says so for that day, but I wanted the ‘closed’ message to remind people that online services are still available: “Closed Today (Online Services Still Available)”.  The catch was, I had that message on one line replacing the ‘hours’ text for that day, and it’s a much longer piece of text.  It threw everything off in the header — kicked everything over to the right, and some of it wrapped onto additional lines, which looked terrible.  So, I broke the text display for the ‘closed’ message into two lines and it works neatly now: “Closed Today<br />Online Services Still Available”.
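The gist of a script like this is simple; here’s a minimal sketch (the hours, day names, and message text are illustrative — our actual borrowed script differs):

```javascript
// Sketch of a date/hours header script (illustrative only; our
// actual borrowed script differs). Hours are indexed by day of week,
// with null meaning closed.
var hoursByDay = [
  null,                 // Sunday: closed
  "7:30am - 10:00pm",   // Monday
  "7:30am - 10:00pm",   // Tuesday
  "7:30am - 10:00pm",   // Wednesday
  "7:30am - 10:00pm",   // Thursday
  "7:30am - 5:00pm",    // Friday
  null                  // Saturday: closed
];

var dayNames = ["Sunday", "Monday", "Tuesday", "Wednesday",
                "Thursday", "Friday", "Saturday"];

function headerText(date) {
  var day = date.getDay();
  var label = dayNames[day] + ", " + date.toLocaleDateString();
  if (hoursByDay[day] === null) {
    // Break the longer 'closed' message onto its own lines so it
    // doesn't push the rest of the header out of alignment.
    return label + "<br />Closed Today<br />Online Services Still Available";
  }
  return label + "<br />Today's Hours: " + hoursByDay[day];
}
```

In the page, the script would write `headerText(new Date())` into the header element once on load.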

III uses “tokens” which are shortcuts to scripts in their system that handle certain tasks.  These work somewhat like SSI (Server Side Includes).

For example, for much of the top header in catalog pages, I can use “toplogo” which calls up a separate HTML partial file to fill in a stock header.  Same for “botlogo” which fills in a stock footer section.  Very handy.
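Conceptually, a token is just a placeholder that the server expands into a stock chunk of HTML before the page goes out — a toy version in JavaScript (only the token names here are real; III’s actual mechanism is internal to their system):

```javascript
// Toy illustration of token expansion -- III's actual mechanism is
// internal to their system; only the token names are real.
var partials = {
  toplogo: "<div id=\"header\">...stock header HTML...</div>",
  botlogo: "<div id=\"footer\">...stock footer HTML...</div>"
};

// Replace each {token} in a page template with its partial's HTML,
// much as SSI replaces an include directive with a file's contents.
// Unknown tokens are left alone.
function expandTokens(template) {
  return template.replace(/\{(\w+)\}/g, function (match, name) {
    return partials.hasOwnProperty(name) ? partials[name] : match;
  });
}
```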

So, I linked the toplogo section in the catalog to run the same JavaScript for the date/hours info from the library server, so I only have to change it one place to update it (changes in hours, holidays, etc.).  Very handy.
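The linkage itself is just an external script reference inside the toplogo partial, something like this (the path shown is hypothetical, not our actual file name):

```html
<!-- Inside the toplogo partial: pull the date/hours script from the
     library server, so one file serves both servers
     (the script path is hypothetical). -->
<script type="text/javascript"
        src="https://library.uafortsmith.edu/scripts/date-hours.js"></script>
```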

Advanced Searching

The catch to the above is, the header in ‘toplogo’ included the same catalog search box that is on all the pages.  The Advanced Searching page in the catalog, however, is one page that uses not an HTML form, but a token which calls up a form, since it’s more complex (combines terms, searches several indexes, etc.).

That called form assumes it’s the only one on the page, which conflicts with the one in the stock header.  Normally, you avoid this by giving specific names to the forms and referring to those, but when using the token to call up the form, I didn’t have the option to change the token-called script.

Trying to use Advanced Search in the proper box resulted in the form trying to get information from the header search box instead (since that was the first form encountered on the page), and then telling me I needed to enter something there.

Answer: I entered the toplogo completely in normal HTML on this one Advanced Search page, instead of calling it with the ‘toplogo’ token.  That way, I could comment out the search box in the header so only this page is without that search box in the header.  Now the only working search was the one called by the Advanced Search token script.  Conflict eliminated.
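The shape of the fix looks roughly like this (element names and markup are illustrative, not III’s actual HTML):

```html
<!-- Advanced Search page only: toplogo reproduced as plain HTML so
     the header search form can be commented out
     (names and markup illustrative). -->
<div id="header">
  ...date/hours script, logo, links...
  <!--
  <form name="headersearch" action="/search" method="get">
    <input type="text" name="q" />
    <input type="submit" value="Search" />
  </form>
  -->
</div>

<!-- The token-generated Advanced Search form is now the first (and
     only) form on the page, so its script finds the right inputs. -->
```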

If I change the header, I’ll have to remember to change this page as well.  However, the date script still works as usual, so no extra concerns on that.


CSS (Cascading Style Sheet) files are multiplying in the catalog.  III includes one of their own (untouchable, since it operates some of their specialized functions), and we can override aspects of that and augment it with another, which we do.  Now I’ve added the one for the new-style pages.

That meant that I had to make sure that adding the style sheet Joni created wouldn’t conflict with any names in the other style sheets.  Then I added that to the list of style sheets to check when a browser creates a page on the screen.
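Load order matters here, since later style sheets win ties in the cascade — the arrangement is roughly this (file names illustrative):

```html
<!-- Later style sheets win ties in the cascade, so order matters
     (file names illustrative). -->
<link rel="stylesheet" href="/screens/styles.css" />    <!-- III's own: untouched -->
<link rel="stylesheet" href="/screens/overrides.css" /> <!-- our overrides/additions -->
<link rel="stylesheet" href="/screens/newlook.css" />   <!-- the new-style pages -->
```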

Then there was minor tweaking to get the CSS to work within our catalog server.  This included some little spacing things to allow for (as usual) Internet Explorer not working to the same standards as other browsers, but that worked out.

I commented a lot in the catalog version of the CSS file as to what I did to make it different from the version on the home server.  I may need to know all that some day.

Testing Browsers

I’m testing with Firefox 3.5 and Internet Explorer 7 and 8 (although 8 is not approved on our campus at this time as it won’t work properly with our version of some instructional software elsewhere on campus), as well as the current Windows version of Safari (since I don’t have a Mac to test) and current Google Chrome.

Progress is being made!

New website is up

After some minor complications, the new library website is up at library.uafortsmith.edu although it’s not “official” as yet.

We tried to have it hosted off-campus, but it kept having trojans added to the JavaScript files (host security was not under our control), so we had it moved to an on-campus server.

Joni Stine gets credit for about 99.75% of it, and I tried to keep to her templates for the basic design on the pages I’ve been adding or updating.

Joni, incidentally, had to move, so she (reluctantly) left and went to a new job elsewhere in Arkansas.  We’re preparing to interview candidates for her position in January.  She did leave us with almost all of the site just about ready to go, and I did a few additional tweaks and then began converting pages and links.  Thanks, Joni!

Server Side Includes (SSI) have one little quirk — you can’t use them in the home (index) page.  So, some things that on the other .shtml pages can just go into a file and be pulled in with an include have to be coded directly on the index page.

We’ve got the day/date/today’s hours script working at the top with a little JavaScript I borrowed a while back.  The left side (on most pages) and the footer are SSI files (except on the home page).
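On the .shtml pages, an SSI include is a one-liner like this (the file name is illustrative); the index page has to carry the same HTML inline instead:

```html
<!-- In any .shtml page except the index: pull in the shared footer
     (file name illustrative). -->
<!--#include virtual="/includes/footer.html" -->
```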

Joni found a neat little script that gives an elegant rotating image so we can do pictures without taking up a huge amount of bandwidth.
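I won’t reproduce Joni’s script here, but the core of any rotating-image script is just cycling through a list — a sketch, with made-up file names:

```javascript
// Sketch of a rotating-image cycle (file names made up; Joni's
// actual script differs). Each call returns the next image in the
// list, wrapping around at the end.
var images = ["building1.jpg", "reading-room.jpg", "stacks.jpg"];
var current = -1;

function nextImage() {
  current = (current + 1) % images.length;
  return images[current];
}

// In the page, a timer would swap the src of a single <img> element,
// so only one image downloads at a time:
// setInterval(function () {
//   document.getElementById("rotator").src = nextImage();
// }, 5000);
```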

I’ve tested it out in IE, Firefox, Chrome and Safari (on Windows only) and will see about trying it on a widescreen laptop ASAP.

Custom Search

I wanted to use a custom Google Search Engine.  Supposedly, according to instructions, Google allows you to submit a txt file with your pages to get them indexed quickly.  Supposedly.

After being looped around and around from an instruction page to a page without the links mentioned in the instructions, I gave up and just copied and pasted the links into the box provided for simpletons, since apparently somebody didn’t make the procedure simple enough to work the way Google claimed it should.  At least, not today.  Ah, well.  The indexing took place, so we can work with that for a while.

Now We Try It Out

Now, we try it out.  If all goes well enough, we have the campus pages changed to link to it, and introduce it at the start of the spring semester.

Crossing my fingers….