The journal impact factor is a sham and a crock and a delusion, let's just take that as read. (If you don't care to take that as read, which is a healthy and sane attitude—take no one's word as gospel, especially not mine!—start here or perhaps here and keep going.) Using it to judge individual researchers' output, never mind the researchers themselves, verges on the criminal, is my strong belief. I'm not against heuristics, but some heuristics are plain broken, and the journal impact factor is one of those.
So it really hurts my heart to see librarians giving this flawed number credence. Librarians! We who call ourselves information experts!
I won't link-and-shame, despite temptation. I'll just say that in the last week, I've come across one library blog posting a list of "here are high-impact journals in X discipline" and another library doing a workshop on "use the impact factor to choose where you publish!"
We are better and wiser than this. I hope we are better and wiser than this.
Of course we can't ignore the impact factor. That's a very long way from saying that we ought to celebrate, support, or draw positive attention to it. When we mention it, we should wrinkle our librarianly noses in disdain. When we teach workshops on it, our attitude should be "look, this system is bad and wrong and I'll happily show you why, but we're stuck with it until everyone wises up, so here's how to game it as best you can." When we look over serials subscriptions, we should frankly ignore it.
It doesn't hurt that by breaking the back of the impact factor, we're reducing the influence of many of the very journals whose inflated prices are breaking our backs.
We have authority and power. This is one very serious situation in which we owe it to our researchers and ourselves to use them wisely.