Sunday, May 18, 2008

Canada's National Portrait Gallery Comes out of the Closet

In the nearly two months since I last blogged, I've finished up my coursework at Western and have embarked upon the final leg of my journey towards being a master of history -- my internship. I am two weeks into a sixteen-week stint at Library and Archives Canada, working in the Government Records branch. The internship program in its current form is quite new, but already I am very impressed with the number of professional development opportunities that have been made available to me and the other interns. One of these presented itself last Thursday when we attended an open house at the downtown branch of LAC, where the public services are located. The day included a variety of tours, including one of the current exhibit, a joint project with NARA (the American counterpart of LAC) featuring a copy of the Treaty of Paris, and one of the National Portrait Gallery.

Before last week, I had never realized that we even had a National Portrait Gallery. In fact, the only one I knew of was the spectacular institution in London, England, where I have spent many blissful hours in the Tudor and Stuart galleries. That institution is affiliated with its next-door neighbour, the National Gallery, but not with the National Archives. By contrast, our version, an affiliate of our national library and archives, is located in an office space, barred to the public. The NPG is unique because it has an active collecting mandate that is more reminiscent of an art gallery than of an archive. It remains part of LAC because of its long history as a repository of nationally significant works, but it also actively purchases, and even commissions, new works of art and is constantly trying to challenge our traditional notions of what a 'portrait' is. And all the while, the majority of its collections are housed in the Gatineau Preservation Centre and rarely seen by the public.

Currently, there is a movement afoot to set up the National Portrait Gallery as our newest national museum (joining the likes of the Canadian Museum of Civilization), but things are complicated. Cities across Canada have been invited to compete for the privilege of hosting the institution, and this has opened a floodgate of controversy. What would it mean for a national institution to be in Calgary or Halifax instead of Ottawa? How will we pay for it? (This is designed to be a public-private partnership, so the chosen city would have to foot part of the bill.) What are the financial repercussions of physically moving the collection, which includes many old and valuable works? The topic has been hotly debated between those who believe that the costs of setting up the museum outside Ottawa outweigh the benefits, and those who champion the idea that national institutions should be located across our nation, not centred in our capital.
I haven't figured out exactly how I feel about these questions, but they are interesting ones, and I'll be sure to follow the debate as it unfolds. Still, wherever it ends up, I look forward to being able to see this beautiful collection the way it was meant to be seen -- in a museum, open to the public, and not in a small and crowded storage facility.
Image courtesy of http://www.portraits.gc.ca/. Part of the NPG's collection, it is an image of Japanese-Canadians being relocated to camps in BC in 1942 and was taken by an unknown photographer.

Sunday, March 23, 2008

Blogging from the Trenches

Few would dispute that the internet has changed the way we learn. In the early days, websites were fairly static, so all the internet really did was increase the ease with which we could access information. Websites were, in effect, little more than books or articles available on a computer screen. But we are now in the age of Web 2.0, a collaborative era in which we communicate and share information more instantaneously, using newer technologies like wikis, blogs, and social networking sites. Because the web has become such a fluid thing, we can learn and correspond in an ever more immediate and interactive way.

This can have interesting consequences for the study of history. One example is the grandson of a First World War soldier, who had the idea to use a very simple format – a blog like this one – to stimulate interest and engagement in history. Bill Lamin, a native of Cornwall, England, used his grandfather’s wartime correspondence to create entries in a blog under his grandfather’s name. For me, the best part of this experiment is that he does it in real time, posting each letter exactly ninety years after it was written. This creates an amazingly immersive experience. Blog readers find themselves in the position of the family at home, waiting, day by day, to see how the story turns out. (And, just to be clear, Harry already had a son before he left for war, so we really don’t know how the story will end.) The blog begins with posts explaining the project and introducing readers to Harry and his family. Starting in mid-1917, there is an impressive collection of letters from Harry, with a new post on every day for which his grandson has a surviving letter, as well as scanned images of postcards, envelopes, certificates, and any other documents relating to the family history of the time that have been found.

This strikes me as a beautiful way for Lamin to share his family history with others, but it is also a very effective way to create interest in this very important period of history. Personal stories always make history more relevant, but what is so unique about this project is the real-time element. Lamin may be “just” a school teacher, and not an historian, but he has succeeded in creating an amazing experience that is immediate, personal, and very interactive. He has also received international attention for his efforts. Lamin plans to publish the letters as a book, but I would encourage anyone who is interested to check out the blog as soon as possible, while you can still experience this project as it was originally conceived.

Tuesday, March 18, 2008

Do you really need to know what Freud had for breakfast?

A common topic of discussion in several of our classes this year has been privacy. To what extent should we consider the privacy of historical figures? Archival repositories have been struggling with these issues for years. Families of deceased persons whose personal papers are bequeathed to an archive may wish to control access to certain documents and, thus, the information they contain. For example, when Sigmund Freud’s papers were deposited at the Library of Congress, many of them were sealed for decades, with restrictions imposed until, in one case, 2113. While most of these documents have now been made public, some restrictions still exist, seventy years after his death. The reason was the concern of his daughter, and of the psychoanalyst in charge of the Freud Archives, that Freud might be exposed to unfair criticism. But of course, no one can actually own a reputation. [1] Information is information, and should be made available to the public as much as possible – right?

In our current, digital age, the issue of privacy becomes much more pressing. At least Freud could control what records he left behind, even if he could not control who sees them from beyond the grave. The documents he left behind are of a conventional nature – things like letters and diaries that he actively and knowingly created. But we now live in a world where it is possible that we are leaving behind a trail of evidence of which we are not even aware.

In a previous post, I discussed spimes and the possibility of a future in which all objects are connected in a sort of wireless network, so that their own personal history is recorded. The issue of privacy comes in when the objects you buy in the store are embedded with chips that allow them to be tracked once you take them home. So it’s not just about the record you leave through email and telephone conversations, or through your diary and handwritten letters like Freud. Now it’s about what you buy and what you do with your possessions. The technology to embed objects with small microchips and monitor their location already exists. It’s called radio frequency identification – or RFID – and has been around since the Second World War. But in the past few years its potential implications have become more apparent and have been cause for concern, even alarm, among some. In a world that is increasingly monitored, where we are ever more frequently under the scrutiny of surveillance cameras, doesn’t this seem like the next logical step?

This technology could certainly have interesting implications for studying history, as I intimated in my previous post. But the already complicated issue of privacy just gets even stickier. Just as Freud’s daughter didn’t want the less flattering elements of her father’s records to come to light, who would really be ok with the world knowing their every purchase after they died? Aren’t some things, after all, better left unknown?


1. Joseph L. Sax, “Not So Public: Access to Collections,” RBM: A Journal of Rare Books, Manuscripts, and Cultural Heritage, vol. 1, no. 2, 101-105.

Tuesday, February 19, 2008

World Fairs: Just Another Flash in the Pan?

My first real memory is from 1986 – or rather, my first string of meaningful memories. I drove across the country with my mother and grandfather to attend my aunt’s wedding in Vancouver and the World Exposition that was being hosted in the same city. To be honest, my actual recollection of the Expo itself is almost non-existent, but I do have a rather impressive collection of memories (and a few photos -- one of which I've tried, without much success, to include here) from the trip. I’ve been thinking about that summer a lot lately, as the topic of world fairs and expositions has come up several times in class discussions and readings.

The idea of a World Fair is an interesting one in its relation to museums, department stores and carnivals – all places where people gather for entertainment, education or socialising. During the 19th century, World Fairs could be incredibly significant social events – think of, for example, the Great Exhibition held in London in 1851 (the first truly international exhibition) or the World’s Columbian Exposition held in Chicago in 1893. These early fairs were showcases of technology and industry, but they also involved dazzling displays of culture from around the world, an exciting event at a time when the world was a lot bigger than it is now. They were also great opportunities for cities to create lasting architectural monuments or public spaces – the most famous of these being, of course, the Eiffel Tower (not to forget the Space Needle).

Are these international events still relevant today? Perhaps the last truly important World Exposition was held in Montreal in 1967, during a time of promise and excitement. It was a time to celebrate the Centennial of Canadian Confederation, and the Expo involved truly innovative architectural projects like the Habitat apartment complex. But by the early 1980s, a world fair held in Louisiana had to declare bankruptcy. There hasn’t been one in North America since 1986, and most of us are unaware that they are, in fact, still held every two to three years – usually in Europe or Asia.

And so, since world fairs are almost forgotten in North America at least, what exactly is the point of them? The original idea was to promote international co-operation and friendship. Before international travel was as easy as it is now, fairs were a chance for people to ‘visit’ foreign lands and learn more about the world around them. But in an increasingly global world, it seems there is too much competing for our attention for world expos to have much success. Public spaces or architectural projects designed for the events can end up underused or ridiculed – see my classmate Sarah Waugh’s recent blog post on her experience in Seattle, or think back to the ever-controversial Habitat.

Expo ’67 certainly meant something to Canadians – both my parents made the trip from Winnipeg to Montreal as children and still have very significant memories of it. But Expo ’86 didn’t have quite the same impact. And I’m not convinced that a World Fair ever will again.

For those who are interested in the history of world fairs, check out this website.

Wednesday, February 13, 2008

For these Ruby Slippers, There's No Place Like Home . . .

In my museology class today, our professor brought to our attention a recent "controversy" involving Oprah Winfrey and a pair of ruby slippers apparently worn by Judy Garland during the filming of The Wizard of Oz. These shoes are usually on display at the Smithsonian Institution's National Museum of American History in Washington, D.C., but were flown specially to Chicago for a recent episode of Oprah's show, travelling first-class with the protection of armed guards. The "controversy" arose when, even as museum director Dr. Brent Glass tried to get across the message that the shoes were fragile and needed proper care, Oprah insisted that she be allowed to pick them up, and then allegedly waved them around; according to one viewer who posted on a discussion board dedicated to the topic, "the slippers touched and touched hard as far as I can tell." And she'd checked her TiVo. I can't, however, comment on the specifics of her handling of the shoes -- the video has disappeared from YouTube, and I can't find it to verify for myself.

So what's at issue here? Basically, it comes down to the fundamental question of access versus preservation. As Dr. Glass said, not everyone would have the chance to visit D.C. and see the shoes for themselves. But by taking them out of the museum, putting them on a plane and subjecting them to the whims of Oprah Winfrey, the museum was putting them at risk of irreparable damage. And for what gains? Well, the show was certainly good publicity for the museum. And Oprah got to have her fun. But seeing the ruby slippers on video in no way compares to seeing them in person.

At the end of the day, all museums have to face the difficult question of whether to keep fragile artifacts in storage, where they are safe from deterioration, or to put them on display, where the public can learn from them and enjoy them. So, too, must archivists face this important challenge. Modern technology, and especially digitization, can go a long way towards improving access to fragile documents and artifacts. But I don't think anyone would deny that there is something very special about actually being in the presence of an historic artifact, and millions of visitors to the Smithsonian each year feel that way about the ruby slippers.

There is no easy answer to this question. But we can only hope that Oprah's little faux-pas did some good, after all: maybe it got people thinking about the objects they see in museums, and how important it is to take good care of them.

Saturday, January 26, 2008

How is a Book like a Can of Beans?

After my recent post on spimes, a reader’s comment pointed me to Clay Shirky’s 2005 talk Making Digital Durable: What Time Does to Categories. I was immediately intrigued and would recommend it to anyone interested in categorization, social tagging and the preservation of information in the digital age. Shirky, who, according to Wikipedia, teaches New Media at NYU’s Interactive Telecommunications Program, has a fascinating and engaging way of exploring the world of digital and analog information systems.


What interested me most were the questions of how we organize what we know and then how we find it again. Shirky examined the flaws of classic library classification schemes like Dewey Decimal and Library of Congress. They have always been necessary, he says, because a book is like a can of beans – without the label, how can you tell a can of chick peas from a can of tomatoes? You have to find some way to organize them, and a flawed system – even one that has nine categories for Christian religion and only one for all the others – is better than none at all. But information on the internet is fluid, no longer encased in tin cans, and isn’t classified by one person or by a hundred but by millions. The problem with the rigid classification schemes found in traditional libraries is that they are necessarily hierarchical and cannot overlap. The beauty of folksonomy – the process of collaboratively creating tags to manage and classify content on the internet – is that this rigidity no longer exists.
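For the computer-savvy among my readers, the contrast can be sketched in a few lines of Python. This is only my own toy illustration – the call numbers and the second book title are invented for the example – but it shows the basic difference: a rigid scheme files each book under exactly one class, while a folksonomy lets a book carry any number of overlapping tags, and anyone can search across them.

```python
# Rigid classification: each book lives in exactly one category,
# like a labelled can of beans on a single shelf.
# (Call numbers here are invented for illustration.)
catalogue = {
    "The Birth of the Museum": "AM7",
    "Shaping the Industrial City": "HT167",
}

# Folksonomy: each book carries as many overlapping tags as its
# readers care to give it, so categories can overlap freely.
tags = {
    "The Birth of the Museum": {"museums", "history", "theory"},
    "Shaping the Industrial City": {"cities", "history", "urban-planning"},
}

def books_tagged(term):
    """Find every book that any reader has tagged with this term."""
    return sorted(title for title, t in tags.items() if term in t)

# "history" cuts across both shelves -- something the one-category
# catalogue above cannot express.
print(books_tagged("history"))
```

Notice that the tag lookup is just a scan over everyone's labels: no librarian had to decide in advance that these two books belong together.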


There are, of course, many potential problems with using this kind of system to classify information. Without terminological control, multiple words can be used to tag the same concept, and the same word can be used to tag many different concepts. We have to ask ourselves which we would prefer: a flawed, but predictable, system designed by professionals, or a spontaneous and ever-changing collection of tags created by the anonymous crowd? Shirky suggests identifying communities of practice, which means you can look at the tags just from the people you care about – those with the same interests or expertise as you. A controlled vocabulary could be created, but it would need to evolve gradually, or the whole thing would simply return to the kind of hierarchical system we already have. In the end, this sort of collaborative classification is generally quite useful – and can be used to tell us about the way people think about information.

But would this kind of thing work on the scale of an academic library? Even though our current classification schemes are flawed, we all know how to work within them. But what would it be like if we could tag the books we take out of the library? We wouldn’t have to get rid of the Library of Congress, but imagine the links and connections we could find, and what we could learn about the users of these books. Maybe books don’t have to stay like cans of beans forever.

Tuesday, January 15, 2008

The Way of the Future?: Spimes and the Internet of Things

I usually don’t understand much that goes on in the digital realm; computers are generally beyond me. In the past few months, I have only just begun to develop an appreciation for the far-reaching implications of changing technology – the way we obtain, organize and process information is undergoing a revolution right under our noses, whether we pay attention to it or not. In our digital history class, we have discussed technologies like OCR (optical character recognition) that are probably quite obvious to the computer-savvy among us, but I have to be honest – it had never crossed my mind to wonder how Google or JSTOR worked, even though I use them all the time. Although it has sometimes been a struggle for me to wrap my mind around these new concepts, I am starting to appreciate how important it is to try and understand this very foreign world. And when I came across something called a spime in one of our readings for class, I actually got kind of excited.

‘Spime’ is a term coined by Bruce Sterling, a science fiction novelist and design critic, for a theoretical object that has a computerized tag that can identify it and allow it to communicate. A spime can be precisely located, and tracked, in time and space. Spimes are networked and reveal metadata about themselves, and owners would be able to personalize this data. Eventually, all of this information would become an internet of things, through which we could see relationships between objects and users.
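To make the idea a little more concrete, here is my own rough sketch (in Python) of what a spime's self-describing record might look like – this is purely illustrative, not Sterling's specification or any real RFID system, and the object and locations are made up. The point is simply that each object carries an identity plus a logged trail of where it has been, which it can report as metadata:

```python
from dataclasses import dataclass, field

@dataclass
class Spime:
    """A toy model of a spime: an object that knows what it is,
    where it has been, and can report that history as metadata."""
    object_id: str
    description: str
    history: list = field(default_factory=list)  # (timestamp, location) pairs

    def ping(self, timestamp, location):
        """Record a sighting of this object in time and space."""
        self.history.append((timestamp, location))

    def metadata(self):
        """The metadata this object would reveal about itself."""
        return {
            "id": self.object_id,
            "description": self.description,
            "last_seen": self.history[-1] if self.history else None,
        }

# A hypothetical tagged object accumulating its own life story:
chair = Spime("obj-0001", "wooden chair")
chair.ping("2008-01-15T09:00", "factory, Winnipeg")
chair.ping("2008-02-02T14:30", "living room, Ottawa")
print(chair.metadata()["last_seen"])
```

Multiply this by every manufactured object, pool the records on a network, and you have something like the internet of things Sterling describes – which is also exactly where the privacy worries in my later post come from.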

Sterling first introduced the idea in 2004, when he coined the term for the theoretical object because “it needs a noun so that we can think about it”[1]. Now, Spime Inc. is a Silicon Valley company that advertises several products on its website, most of them software for mobile phones, but one of which claims that “It can be deployed at homes to track your assets and locate children”. I’m certainly not going to pretend that I understand the technology or even most of what’s on the website. But I am intrigued.

There are several interesting implications. Sterling emphasized the non-renewability of many of our resources and the importance of knowing the resources we have, and using them:

“Our material culture is not sustainable. Its resources are not renewable. We cannot turn our entire planet's crust into obsolete objects. We need to locate valuable objects that are dead, and fold them back into the product stream. In order to do this, we need to know where they are, and what happened to them. We need to document the life cycles of objects. We need to know where to take them when they are defunct.
In practice, this is going to mean tagging and historicizing everything. Once we tag many things, we will find that there is no good place to stop tagging.”
[2]

So yes, there are very interesting implications for industry, and for the potential to improve the products we use every day. But there are also implications for history. Imagine if every object dug up in an archaeological expedition was a spime. What more could that teach us? How much easier would the jobs of historians be? It’s true that we are talking about an internet of things, not of ideas, and while objects are incredibly valuable tools for understanding history, they alone are not enough. Nevertheless, this idea could have far-reaching implications for how we understand the present and (in the future) our past.

Despite the possible problems with theft, fraud, and invasion of privacy, the idea of being able to embed our objects with this kind of tag is an intriguing one. Interestingly, however, although three and a half years have passed since Sterling first coined the term (and speaking of new ways of organizing information), there is still no Wikipedia entry for spimes. It seems the internet of things has not arrived quite yet. But I’ll be ready when it does.

[1] Bruce Sterling, "When Blobjects Rule the Earth", SIGGRAPH, Los Angeles, August 2004.
[2] Ibid.

Friday, January 4, 2008

Sometimes we'd rather be drinking . . .

I spent the holiday season this year in London (England, not Ontario) and visited a few of my favourite museums. To be honest, however, I spent far more time hanging out with my friends and telling myself I should be visiting museums than actually visiting them. But on the occasions that I did make it to one of these hallowed halls of learning and culture, I couldn’t stop thinking about Tony Bennett’s "History and Theory" (in The Birth of the Museum: History, Theory, and Politics), which we read in our public history class last semester. In his historical discussion of museums, fairs and exhibitions, he argues that they “regulate the performative aspects of their visitors’ conduct” and are “instruments capable of lifting the cultural level of the population.” [1]

London is full of world-class museums, but also of other tourist attractions like the London Eye and Madame Tussauds that exist purely to entertain (or, as the more cynical might say, to make money). In the past decade, most of London’s best museums have waived their entry fees and so, in theory at least, welcome every segment of the population. But the brutal truth is that most people who live in London will never make it to the V&A or the National Gallery. I know – I worked in Hackney, one of the roughest areas in the city, for a year and met many people who barely knew these places existed, let alone had the remotest interest in visiting them.

The tone in these museums is very definitely one of high-brow elitism. The guided tours are led by genteel retired women who, if they don’t exactly take on an attitude of superiority, assume a considerable general knowledge base that is probably beyond the scope of many of their visitors. Said visitors walk around slowly with their hands clasped behind their backs, high heels echoing through the marble halls. Maybe Bennett is on to something – our behaviour is regulated once we enter a museum. This is not the real world.

And many of us are guilty of a certain sense of superiority when we visit one of these institutions. There is a sense that it is a far better use of our time than shopping or gossiping with friends at lunch. There is a sense that we are better people for exposing ourselves to “high culture”, whatever that is. I’m guilty of it myself. Several times during my trip, as I lay on the couch watching TV or sat in the pub with a pint, I felt a certain stab of guilt that I wasn’t out lapping up all the best London had to offer. But in the end, I enjoyed my trip just fine. And I spent a total of about three hours in museums, most of it in gift shops.

Do museums really lift the cultural level of the population? Sometimes they most definitely do. During the year I spent in London, I honestly did visit museums quite often, sometimes returning again and again, and some exhibits I saw introduced me to new information and schools of thought that have remained with me to this day.

But let’s stop faking it. Sometimes museums are great. And sometimes there’s nothing wrong with just going down the pub instead.

[1] Tony Bennett, “History and Theory,” The Birth of the Museum: History, Theory, and Politics
(London: Routledge, 1995), pp. 6-7


Image of Henry VIII courtesy of http://www.npg.org.uk/live/index.asp

Image of the Tate Modern courtesy of me