The Emoji Timeline

This post was originally published on the Cooper Hewitt Labs blog.

We made an emoji timeline. You can see it on any object page — it looks like this:

[Screenshot: the emoji timeline as it appears on an object page]

I want to frame the work we did on the timeline around web accessibility. Web accessibility (the idea that something should be fully usable by all users, regardless of disability status) is something we need to make a higher priority on our websites. We already do things like subtitling all of our videos, releasing the .srt files over our API, and employing progressive enhancement to ensure that no content is withheld from visitors using older or alternative web browsers. But turning on VoiceOver and having my computer read an object page recently revealed some of the many opportunities we still have to improve accessibility on the site.

A few months ago, we started talking about updating our previous timeline to integrate it with the new object page layout we’d recently completed. I opened up NYPL’s Digital Collections because I really like their timeline (it’s at the bottom of the page) and wanted to use it as a conversation-starter for how we could improve ours.

We quickly realized the accessibility concerns that something like this poses. There were two main ones. The first was that a purely JavaScript solution, like the one we had before, could be troublesome for someone using a screen reader, so the timeline would need a “non-JS” version. The second was that a timeline relying on color to differentiate events could be troublesome for a colorblind person, so we would need to provide a legend.

The first issue would be easy enough to solve. We would add the timeline’s contents to the markup (HTML) first and then add any extra functionality on top of it.
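As a minimal sketch of what that looks like in practice (with hypothetical markup and class names, not our actual templates), the timeline is rendered server-side as plain HTML and JavaScript only decorates what is already on the page:

```typescript
// Minimal sketch (hypothetical class names): the timeline already exists in
// the server-rendered HTML, so screen readers and non-JS browsers get the
// full content. This script only adds behavior on top.
const timeline = document.querySelector<HTMLElement>('.object-timeline');

if (timeline) {
  timeline.querySelectorAll<HTMLElement>('.timeline-event').forEach((eventEl) => {
    // Mark the event as enhanced so CSS can style the interactive version.
    eventEl.classList.add('timeline-event--enhanced');

    // If the server put a link target on the element, make the whole event clickable.
    const href = eventEl.dataset.href;
    if (href) {
      eventEl.addEventListener('click', () => {
        window.location.href = href;
      });
    }
  });
}
// If this script never runs, the markup above is still fully readable.
```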

The thought process for the second issue (use of color) went like this:

Color will be useless so we will have to use shapes as well. But then we’ll need a legend because color or shape don’t have anything to do with timeline events like “date created” or “date exhibited.” But legends are a bummer — so old media! Maybe we could use symbols instead? Maybe we could use EMOJI?!?!?

Beyond the fact that their symbolic nature transcends the “map a color to an idea” style of legend, emojis further a second goal that we have with the collections site, which is to “avoid feeling like a database.” They allow for novelty, comfort and humor to be injected into the digital world as quickly as typing a letter on a keyboard. In fact, as I write this post, Oxford Dictionaries announced that the “face with tears of joy” emoji is its word of the year.

Thus, the emoji timeline was born. No legend required, compact in size and fun to look at!

The vast majority of objects have a timeline that looks like this:

[Screenshot: a typical emoji timeline]

We acquired it in 1941 as a gift. We took a picture of it in 2015, and we’re happy that you found it now! (We borrowed the “found by you” idea from NYPL; it makes me feel so warm and fuzzy.)

There are some other ways that we acquire objects, and we use different emojis to signify them. We acquired this object by bequest, so it gets a scroll:

[Screenshot: timeline for an object acquired by bequest, marked with a scroll emoji]

Some objects are “found” in our collection; these get a magnifying glass:

[Screenshot: timeline for an object “found” in the collection, marked with a magnifying glass emoji]

We purchased this object, presumably with dollars:

[Screenshot: timeline for a purchased object]

Some objects have more frantic timelines:

[Screenshot: an object with a busier timeline]

The default, emoji-free experience looks like this:

[Screenshot: the default, emoji-free timeline]

You’ll notice a lot of cameras, too. These mean that we photographed the object in that year, and each camera icon is a link that takes you straight to the picture (see this sidewall as an example). We limit it to one photograph per year, because some of our objects have a lot of photos and the timeline started to look like this:

[Screenshot: a timeline crowded with camera icons]
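A minimal sketch of that kind of per-year filtering might look like this; the Photo shape is hypothetical, not our actual data model:

```typescript
// Hypothetical sketch: group an object's photos by the year they were taken
// and keep only the first one from each year, so the timeline stays readable.
interface Photo {
  url: string;
  taken: Date;
}

function onePhotoPerYear(photos: Photo[]): Photo[] {
  const byYear = new Map<number, Photo>();
  for (const photo of photos) {
    const year = photo.taken.getFullYear();
    if (!byYear.has(year)) {
      byYear.set(year, photo); // keep the first photo we see for this year
    }
  }
  return [...byYear.values()];
}
```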

Exhibitions, symbolized by the big eyeballs, are also clickable. They take you to the exhibition page, where you can see all the objects that were displayed alongside the one you’re looking at.

I don’t mean to say we’ve solved accessibility issues through this timeline. As I mentioned before, a concerted look at our site’s accessibility issues will be an important next step for us.  But like our previous timeline, it’s more of an opportunity to experiment than anything else. In this case, we’re experimenting with ways to approach a project by talking about accessibility first. And of course, we’re experimenting with emoji. Always be experimenting with emoji.

Pull up a random object and check it out!

Sorting, Synonyms and a Pretty Pony

This post was originally published on the Cooper Hewitt Labs blog.

We’ve been undergoing a massive rapid-capture digitization project here at the Cooper Hewitt, which means every day brings us pictures of things that probably haven’t been seen for a very, very long time.

As an initial way to view all these new images of objects, I added “date last photographed” to our search index and allowed results to be sorted by it on the search results page.
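For the curious, that sort maps directly onto an Elasticsearch query; here’s a rough sketch, with a made-up index name and field name:

```typescript
// Rough sketch: ask Elasticsearch for objects ordered by the new
// "date last photographed" field, newest first. The index and field names
// here are illustrative, not our real schema.
async function recentlyPhotographed() {
  const response = await fetch('http://localhost:9200/objects/_search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: { match_all: {} },
      sort: [{ date_last_photographed: { order: 'desc' } }],
      size: 50,
    }),
  });

  const results = await response.json();
  return results.hits.hits; // the most recently photographed objects first
}
```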

That’s when I found this.

[Image: a small figurine of a pony]

I hope we can all agree that this pony is adorable and that if there is anything else like it in our collection, it needs to be seen right now. I started browsing around the other recently photographed objects and began to notice more animal figurines:

[Images: two more animal figurines from the collection]

As serendipitous as it was to come across this wonderful collection-within-a-collection by browsing through recently photographed objects, what if someone is specifically looking for this group? The whole process shows off some of the work we did last summer switching our search backend over to Elasticsearch (which I recently presented at Museums and the Web). We wanted to make it easier to add new things to the search index so we could provide users (and ourselves) with as many “ways in” to the collection as possible, since it’s those entry points that allow more emergent groupings to be uncovered. Browsing is great for somebody casually scrolling through pictures, but a user who wants to browse is different from a user who wants to search. Once we uncover a connected group of objects, what can we do to make it easier to find in the future?

Enter synonyms. Synonyms, as you might have guessed, are a text analysis technique we can use in our search engine to relate words together. In our case, I wanted to relate a bunch of animal names to the word “animal,” so that anyone searching for terms like “animals” or “animal figurines” would see all these great little friends. Like this bear.

[Image: a figurine of a bear]

The actual rule (generated with the help of Wikipedia’s list of animal names) is this:

"animal => aardvark, albatross, alligator, alpaca, ant, anteater, antelope, ape, armadillo, baboon, badger, barracuda, bat, bear, beaver, bee, bird, bison, boar, butterfly, camel, capybara, caribou, cassowary, cat, kitten, caterpillar, calf, bull, cheetah, chicken, rooster, chimpanzee, chinchilla, chough, clam, cobra, cockroach, cod, cormorant, coyote, puppy, crab, crocodile, crow, curlew, deer, dinosaur, dog, puppy, salmon, dolphin, donkey, dotterel, dove, dragonfly, duck, poultry, dugong, dunlin, eagle, echidna, eel, elephant, seal, elk, emu, falcon, ferret, finch, fish, flamingo, fly, fox, frog, gaur, gazelle, gerbil, panda, giraffe, gnat, goat, sheep, goose, poultry, goldfish, gorilla, blackback, goshawk, grasshopper, grouse, guanaco, fowl, poultry, guinea, pig, gull, hamster, hare, hawk, goshawk, sparrowhawk, hedgehog, heron, herring, hippopotamus, hornet, swarm, horse, foal, filly, mare, pig, human, hummingbird, hyena, ibex, ibis, jackal, jaguar, jellyfish, planula, polyp, scyphozoa, kangaroo, kingfisher, koala, dragon, kookabura, kouprey, kudu, lapwing, lark, lemur, leopard, lion, llama, lobster, locust, loris, louse, lyrebird, magpie, mallard, manatee, mandrill, mantis, marten, meerkat, mink, mongoose, monkey, moose, venison, mouse, mosquito, mule, narwhal, newt, nightingale, octopus, okapi, opossum, oryx, ostrich, otter, owl, oyster, parrot, panda, partridge, peafowl, poultry, pelican, penguin, pheasant, pigeon, bear, pony, porcupine, porpoise, quail, quelea, quetzal, rabbit, raccoon, rat, raven, deer, panda, reindeer, rhinoceros, salamander, salmon, sandpiper, sardine, scorpion, lion, sea urchin, seahorse, shark, sheep, hoggett, shrew, skunk, snail, escargot, snake, sparrow, spider, spoonbill, squid, calamari, squirrel, starling, stingray, stinkbug, stork, swallow, swan, tapir, tarsier, termite, tiger, toad, trout, poultry, turtle, vulture, wallaby, walrus, wasp, buffalo, carabeef, weasel, whale, wildcat, wolf, wolverine, wombat, woodcock, woodpecker, worm, wren, yak, zebra"

Where every word to the right of the => automatically gets added to a search for a word to the left.
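If you want to try something similar, a rule like this can be wired into an Elasticsearch index as a search-time synonym filter. Here’s a rough sketch; the index, analyzer and field names are made up, and the synonym list is truncated for brevity:

```typescript
// Hypothetical sketch: create an index whose search analyzer expands the
// token "animal" into the long list of animal names at query time.
// All names here are illustrative; the synonym list is truncated.
async function createIndexWithSynonyms() {
  await fetch('http://localhost:9200/objects', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      settings: {
        analysis: {
          filter: {
            animal_synonyms: {
              type: 'synonym',
              synonyms: ['animal => aardvark, albatross, alligator, alpaca, ant'], // …and the rest
            },
          },
          analyzer: {
            animal_search: {
              type: 'custom',
              tokenizer: 'standard',
              filter: ['lowercase', 'animal_synonyms'],
            },
          },
        },
      },
      mappings: {
        properties: {
          description: {
            type: 'text',
            analyzer: 'standard',
            search_analyzer: 'animal_search', // expand synonyms at query time only
          },
        },
      },
    }),
  });
}
```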

Not only does our new search stack provide us with a useful way to discover emergent relationships, but it makes it easy for us to “seal them in,” allowing multiple types of user to get the most from our collections site.

How re-opening the museum enhanced our online collection: new views, new API methods

This post was originally published on the Cooper Hewitt Labs blog.

At the backend of our museum’s new interactive experiences lies our API, which is responsible for providing the frontend with all the data necessary to flesh out the experience. From everyday information like an object’s title to more novel features such as tags, videos and people relationships, the API gathers and organizes everything that you see on our digital tables before it gets displayed.

In order to meet the needs of the experiences designed for us by Local Projects on our interactive tables, we added a lot of new data to the API. Some of it was already sitting there and we just had to go find it; other aspects we had to generate anew.

Either way, this marks a huge step towards a more complete and meaningful representation of our collection on the internet.

Today, we’re happy to announce that all of this newly-gathered data is live on our website and is also publicly available over the API (head to the API methods documentation to see more about that if you’re interested in playing with it programmatically).
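As a rough sketch, a call looks something like this; check the API methods documentation for the exact method names and parameters, and note that the access token and object id below are placeholders:

```typescript
// Rough sketch of calling the collection API over HTTP. Verify the method
// name and parameters against the API documentation; the token and object id
// here are placeholders.
async function getObjectInfo() {
  const params = new URLSearchParams({
    method: 'cooperhewitt.objects.getInfo',
    access_token: 'YOUR_ACCESS_TOKEN',
    object_id: 'OBJECT_ID',
  });

  const response = await fetch(`https://api.collection.cooperhewitt.org/rest/?${params}`);
  return response.json();
}
```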

People

For the Hewitt Sisters Collect exhibition, Local Projects designed a front-end experience for the multitouch tables that highlights the early donors to the museum’s collection and how they were connected to each other. Our in-house “TMS liaison”, Sara Rubinow, worked to gather and structure this information before adding it to TMS, our collection management system, as “constituent associations”. From there I extracted the structured data to add to our website.

We created the following new views on the web frontend to house this data:

We also added a few new biography-related fields: portraits or photographs of Hewitt Sisters people, and two new biographies, one of 75 words and the other of 50 characters. These changes are viewable on applicable people pages (e.g. Eleanor Garnier Hewitt) and on the search results page.

The overall effect of this is to make more use of this people-related data and to encourage its further expansion over time. We can already imagine a future where other interfaces examine and reveal the network of relationships behind the people in our collection.

Object Locations and Things On Display

Some of the more difficult tasks in updating our backend to meet the new requirements involved objects no longer being static, but moving on and off display. As far as the website was concerned, it was a luxury during our three years of renovation that objects weren’t moving around much, because it meant we didn’t have to prioritize writing the code to handle their movement.

But now that we are open, we need to better distinguish the objects in storage from those on display. More importantly, if an object is on display, we also need to say which exhibition and which room it is in.

Object locations have a lot of moving parts in TMS, and I won’t get into the specifics here. In brief, object movements from location to location are stored chronologically in a database. Each “movement” is its own row that references where the object moved and why it moved there. By querying this history appropriately, we can say which objects have ever been in the galleries (like every museum, we have a large portion of objects that have never been part of an exhibition) and which objects are there right now.
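To give a flavor of that logic, here is a much-simplified sketch of reading an object’s current location out of its movement history; the record shape is hypothetical, and the real TMS tables are considerably more involved:

```typescript
// Hypothetical, simplified model of an object's movement history.
// Each movement row says where the object went and when.
interface Movement {
  objectId: number;
  location: string; // e.g. "Gallery 2" or "Storage A"
  isPublicGallery: boolean;
  movedAt: Date;
}

// The latest movement tells us where the object is right now.
function currentLocation(history: Movement[]): Movement | undefined {
  return [...history].sort((a, b) => b.movedAt.getTime() - a.movedAt.getTime())[0];
}

// An object is "on display" if its most recent movement put it in a public gallery.
function isOnDisplay(history: Movement[]): boolean {
  return currentLocation(history)?.isPublicGallery ?? false;
}

// An object has "ever been in the galleries" if any movement was to a public gallery.
function hasEverBeenOnDisplay(history: Movement[]): boolean {
  return history.some((move) => move.isPublicGallery);
}
```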

We created the following views to house this information:

Exhibitions

The additions we’ve made to exhibitions are:

There is still some work to be done with exhibitions. This includes figuring out a way to handle object rotations (the process of swapping out some objects mid-exhibition) and outgoing loans (the process of lending objects to other institutions for their exhibitions). We’re expecting that objects on loan should say where they are and which external exhibition they are part of, creating a valuable public “trail” of where an object has traveled over its life.

Tags

Over the summer, we began an ongoing effort to “tag” all the objects that would appear on the multitouch tables. This includes everything on display, plus about 3,000 objects related to those. The express purpose of the tags was to provide a simple, curated browsing experience on the interactive tables, loosely based around the themes of “user” and “motif.” Importantly, these tags are not unstructured; Sara Rubinow did a great job normalizing them where possible, but there haven’t been enough exhibitions yet to release a public thesaurus of tags.

We also added tags to the physical object labels to help visitors draw their own connections between our objects as they scan the exhibitions.

On the website, we’ve added tags in a few places:

That’s it for now – happy exploring! I’ll follow up with more new features once we’re able to make the associated data public.

Until then, our complete list of API methods is available here.