Keep it small and simple

We were often asked about the project ‘Virtual Guide to the Flora of Mongolia – Plant Database as Practical Approach’: Why did you restrict yourselves to the region of Mongolia?

Here are the TOP 3 of our answers:

  1. Because of the long-term research cooperation between Mongolia and Germany, which has resulted in comprehensive herbarium collections
  2. Because of namespace requirements: plant names are valid within the namespace of their floristic region, just as a variable is valid within the namespace of its programming context
  3. Because of limited project staff: the web application had to be so smart, well-arranged and user-optimized that it could be implemented, programmed and maintained with a minimum of staff

text by: U. Najmi

Keep it clearly presented

We were often asked about the project ‘Virtual Guide to the Flora of Mongolia – Plant Database as Practical Approach’: Why did you restrict yourselves to just a few scans per taxon? Having the opportunity, you should have digitized all the herbarium sheets you could get hold of.

Here are the TOP 3 of our answers:

  1. The aim of the project was an approach to a virtual flora, not a virtual herbarium. Although the two have much in common, they should be implemented separately.
  2. Usually, we are able to take in an arrangement of five to seven items at a glance. Confronted with more items at once, we tend to feel overwhelmed.
  3. There are herbarium specimens that are really worth digitizing, and there are others that are not. We simply skipped the others.

text by: U. Najmi

Keep it searchable

We were often asked about the project ‘Virtual Guide to the Flora of Mongolia – Plant Database as Practical Approach’: Why did you take the effort to type in all that metadata? Don’t you think it is enough to have the metadata on the digitized herbarium sheet?

Here are the TOP 3 of our answers:

  1. Because we could do it (and the list import tool helped us a lot).
  2. Because we aimed for a virtual approach to the flora that could be explored in many categories, such as: Who collected this record? Who determined it? Who revised it? Which herbarium does it come from? What habitat did the plant grow in? Was the plant flowering or fruiting?
  3. Because it was less effort to enter the metadata before or while scanning a herbarium sheet than to enter it afterwards.

But you are right: the older the herbarium specimens, the less metadata we were able to enter.
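The categories above can be sketched as a simple in-memory search over record metadata. This is only an illustration: the field names and sample records are hypothetical, not the project’s actual database schema.

```python
# Illustrative records – field names and values are made up, not the
# real FloraGREIF schema.
records = [
    {"id": 4709, "taxon": "Stipa krylovii", "collector": "A. Example",
     "determiner": "B. Example", "habitat": "dry steppe", "flowering": True},
    {"id": 4710, "taxon": "Allium mongolicum", "collector": "A. Example",
     "determiner": "C. Example", "habitat": "sand dune", "flowering": False},
]

def search(records, **criteria):
    """Return all records whose metadata match every given criterion."""
    return [r for r in records
            if all(r.get(key) == value for key, value in criteria.items())]

hits = search(records, collector="A. Example", flowering=True)
print([r["id"] for r in hits])  # → [4709]
```

Once the metadata is in the database, each question (“Who collected it?”, “What habitat?”) becomes just another keyword argument.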

text by: U. Najmi

Keep it searchable II

Users have different preferences when exploring data. Therefore we offered, right from the start, two different ways to explore the data.

  • Targeted search: Enter your search parameters, and receive a customized result list.
  • Overview search: Browse through the contents, and watch for the icons that indicate ‘here are scans’, ‘here are photos/images’, ‘here are habitat photos’. Climb up and down the taxa hierarchy: from family to genus to species level and vice versa. Switch from taxon view directly to record view and back. Find your own way to explore the various data presented here.
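The overview search rests on the family → genus → species hierarchy mentioned above. A minimal sketch of how such a browsing tree can be built, with made-up example taxa:

```python
# Hypothetical taxa for illustration: (family, genus, species) triples.
taxa = [
    ("Poaceae", "Stipa", "Stipa krylovii"),
    ("Poaceae", "Stipa", "Stipa gobica"),
    ("Amaryllidaceae", "Allium", "Allium mongolicum"),
]

def overview(taxa):
    """Group species by family and genus, for climbing up and down."""
    tree = {}
    for family, genus, species in taxa:
        tree.setdefault(family, {}).setdefault(genus, []).append(species)
    return tree

tree = overview(taxa)
print(sorted(tree["Poaceae"]["Stipa"]))  # → ['Stipa gobica', 'Stipa krylovii']
```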

text by: U. Najmi

Keep it easy to link to

It’s no effort to browse through numerous taxon descriptions, record data collections and image galleries – if you aren’t in a hurry. It’s no effort to enter a specific search term and get a result list – if you remember that specific term.

But it is really easy to just bookmark the page where you found that interesting taxon information, that new herbarium specimen record, that beautiful plant picture, and share the URL with other interested users in your community, as you can do with any page offered on this site.

All search parameters, record IDs, image IDs and taxon IDs are available in the URL, because all requests are implemented as GET requests. That limits the number of possible parameters, but it makes easy click-and-link possible.
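Because everything travels in the query string, any bookmarked link can be taken apart and rebuilt with standard tools. A short sketch using Python’s urllib with the example record URL from this site:

```python
from urllib.parse import urlsplit, parse_qs, urlencode

# A real-style URL from the project: all parameters are visible in it.
url = ("http://floragreif.uni-greifswald.de/floragreif/"
       "?flora_search=Record&record_id=4709")

# The complete request state can be recovered from the URL alone.
params = parse_qs(urlsplit(url).query)
print(params)  # → {'flora_search': ['Record'], 'record_id': ['4709']}

# And building a shareable link is just as direct:
link = ("http://floragreif.uni-greifswald.de/floragreif/?"
        + urlencode({"flora_search": "Record", "record_id": 4709}))
```

With POST requests, none of this would be visible or bookmarkable; that is the trade-off the paragraph above describes.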

text by: U. Najmi

Keep it searchable III

It’s no magic to be found on the World Wide Web; it’s just a question of SEO – search engine optimization.

There are hundreds of megabytes of literature on SEO, so I just want to repeat the most common tips:

  • Make sure every page on your site is linked from at least one other page of your site. Imagine a search engine as a spider crawling a web – it simply won’t jump from page to page when there’s no link between them. Or imagine it as a curious monkey – it will collect any information on your website, as long as it is able to move from page to page. Search engines work like a guidebook, so make sure the automatic editors of this book can access your web pages, crawling like a spider or moving like a monkey.
  • Make sure every picture on your site is tagged with metadata such as a title. Remember the spiders and monkeys – they unfortunately lack the ability to appreciate the content of the pictures, but they do read all captions.
  • If users still won’t find what they are looking for, although you think they really should be able to, optimize your website first. Don’t be afraid of feedback – benefit from these remarks by improving your website.
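The first tip can be checked mechanically: follow links from the start page, exactly as a crawler would, and see which pages remain unreachable. A minimal sketch, where the site dict is a made-up stand-in for real HTML pages:

```python
# Hypothetical site map: page → list of pages it links to.
site = {
    "/": ["/taxa", "/records"],
    "/taxa": ["/", "/records"],
    "/records": ["/"],
    "/orphan": [],  # no page links here – invisible to a crawling spider
}

def reachable(site, start="/"):
    """Collect every page reachable from the start page by links."""
    seen, stack = set(), [start]
    while stack:
        page = stack.pop()
        if page not in seen:
            seen.add(page)
            stack.extend(site.get(page, []))
    return seen

orphans = set(site) - reachable(site)
print(orphans)  # → {'/orphan'}
```

Any page that shows up in `orphans` is exactly the kind of page the spider and the monkey will never find.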

text by: U. Najmi

Keep it stable to link to

All things change, as time goes by. Outdated bookmarks and dead links are common problems. Is there a solution?

As known from other projects, a folder structure mapped into a URL is a weak point. That’s why we tried to avoid mapping the folder structure into URLs. A typical URL in our project looks like this: http://floragreif.uni-greifswald.de/floragreif/?flora_search=Record&record_id=4709.
First we have a ‘header’: http://floragreif.uni-greifswald.de/floragreif/. This won’t change unless we decide to change the domain name, which is unlikely, but still possible.
Then we have a ‘parameter that indicates what to search for’: ?flora_search=Record, records in this case, and ‘further parameters to refine the search’: &record_id=4709. In this case, it’s the object ID of the record.
In our project, as in many others, it is the administrators’ responsibility to keep the URLs stable to link to. All the user can do is trust in the administrators’ knowledge and experience.

If you are not sure whether you are able to guarantee stable URLs to the users of your website, you may find it helpful to use a DOI (digital object identifier) service. How does that work? You compile a list of URLs and send it to your DOI registration service. This service will generate DOIs according to your list. From then on, your users will bookmark the DOI instead of the URL.
As time goes by and you want to restructure your website, you just compile a list of the new URLs and send it to your DOI registration service. The users, having bookmarked the DOI instead of the URL, often won’t even notice that you restructured the content on your web server.
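The mechanism behind this is a plain level of indirection: users bookmark the stable identifier, and only the mapping behind it changes. A sketch of that idea – the DOI prefix and URLs here are invented for illustration, not real registered DOIs:

```python
# Made-up DOI → URL table; a real DOI registration service maintains
# this mapping for you.
doi_table = {
    "10.99999/flora.4709": ("http://floragreif.uni-greifswald.de/"
                            "floragreif/?flora_search=Record&record_id=4709"),
}

def resolve(doi):
    """Look up the current URL behind a stable identifier."""
    return doi_table[doi]

# After restructuring the website, only the table entry is updated;
# the bookmarked DOI stays exactly the same.
doi_table["10.99999/flora.4709"] = (
    "http://floragreif.uni-greifswald.de/new/records/4709")
print(resolve("10.99999/flora.4709"))
```

This is also why a redirect from outdated URLs (see the summary below the code in spirit, not implementation) works: both approaches put one stable name in front of a changeable location.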

Summary:

  • A URL is called stable if the expected content is still available under it.
  • A website that has gone offline has no chance to keep its URLs stable.
  • After a restructuring/reorganization of contents, it is very likely that old URLs are no longer available.
  • It is good practice to redirect requests for outdated URLs after a reorganization of content.
  • Consider using a DOI registration service.

text by: U. Najmi