International workshop
Rome, October 17th, 2002, Palazzo dei Congressi, Bibliocom 2002

Quality in cultural Web sites

edited by the Ministero per i Beni e le Attività Culturali, in conjunction with the European Commission and the Associazione Italiana Biblioteche, within the MINERVA Project




Nicoletta Di Blas
HOC - Dipartimento di Elettronica ed Informazione, Politecnico di Milano, Italy
diblas@mail2.elet.polimi.it

Franca Garzotto
HOC - Dipartimento di Elettronica ed Informazione, Politecnico di Milano, Italy
garzotto@polimi.it

Maria Pia Guermandi
IBC, Istituto Beni Culturali, Regione Emilia Romagna
MPGuermandi@ibc.regione.emilia-romagna.it


It works! A systematic method to evaluate the features of museum Web-sites



Abstract

MiLE is a general framework for evaluating the quality and usability of hypermedia. It includes a specialized version for the evaluation of museum Web sites, which provides quality criteria and procedural guidelines to be used for the inspection and empirical assessment of the usability of museum Web sites. This research has been carried out by Politecnico di Milano and the University of Italian Switzerland in Lugano, in cooperation with IBC (the Institute for the Cultural Heritage of the Emilia Romagna Region).

Introduction

The so-called "inspection" methods are performed by expert evaluators who explore the application trying to identify usability problems. In contrast, empirical methods basically consist in observing a group of end-users actually using the application in a laboratory, under the guidance and observation of usability experts.

MiLE (the Milano-Lugano Evaluation method, developed by Politecnico di Milano and USI - University of Italian Switzerland) consists of a very effective combination of the two approaches: first, a few (no more than two or three) usability experts explore the application; second, panels of users are asked to concentrate on those aspects where the systematic exploration has found problems.

The key concepts of MiLE are the following:

  • Abstract Tasks (ATs), that is, generic actions (generic in that they can be applied to a wide range of applications) capable of leading the inspector, like Ariadne's thread, through the maze of the different parts and levels an application is made of.
  • Concrete Tasks (CTs), that is, specific actions (specific in that they are defined for a single application) which users are asked to perform while exploring the application during empirical testing.

Tasks are sketched on the basis of user scenarios, that is, "stories about use" of the application. A user scenario consists of the combination of a user profile - sketching the basic characteristics of a potential user in terms of some relevant data (age, technological expertise, occupation, etc.) - and a task (or a set of tasks) useful for achieving a given goal.

In order to make the results of the exploration more precise, MiLE suggests separating different levels of analysis: technology, navigation, content, graphics, cognitive features. For each level a library of Abstract Tasks has to be prepared when building the method, in order to support the inspection. For some levels (e.g. graphics or navigation) the Abstract Tasks can be largely independent from the specific application domain; for other levels (e.g. content) we shall have different tasks according to the application domain (i.e. specific tasks for the cultural heritage domain, for the e-commerce domain, and so on). When performing the inspection, the inspector has to check and measure a list of attributes concerning the different facets of usability/quality (e.g. richness and completeness are attributes for the analysis of the content level), by executing the abstract tasks related to the various attributes. For each attribute (in relation to a specific AT), a score must be given.

After the scoring phase is over, the set of collected scores is analyzed through weights that define the relevance of each attribute for a specific goal. Weighting allows a clean separation of the scoring phase (use the application, perform the tasks, and score them) from the evaluation phase in the strict sense, in which different possible usages are considered.

Let us introduce a simple example: assume that a navigation feature (e.g. using indexes) is not very powerful, but very easy to learn. What should the evaluation be? The inspector can provide a score for the navigation (e.g. 9/10 for "predictability" and 2/10 for "powerfulness"). Later, figuring out two different user scenarios (e.g. casual users and professional users), the evaluator (possibly different from the inspector) could assign two different pairs of weights to the attributes "predictability" and "powerfulness". The weights, for example, could be «0.8 (predictability), 0.2 (powerfulness)» for casual users, and «0.1 (predictability), 0.9 (powerfulness)» for professional users. The weighted score for the navigation feature is of course very different (7.6 for casual users and 2.7 for professional users, respectively), but it reflects the different user scenarios. The inspector could therefore conclude that the application (at least for this feature) is well suited for casual users, while it is somewhat ineffective for professional users. Trying different weighting systems allows the evaluator to test different user scenarios using the same set of scores derived from the inspection (we give a more detailed example of scoring and weighting in section 3).
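As a minimal illustration, this scoring-and-weighting step can be written down in a few lines of Python (the scores and weights are the ones from the example above; the function and variable names are our own sketch, not part of MiLE):

    # Inspector's scores (0-10) for one navigation feature, per attribute.
    scores = {"predictability": 9, "powerfulness": 2}

    # One weight set per user scenario; each set sums to 1.
    scenario_weights = {
        "casual user":       {"predictability": 0.8, "powerfulness": 0.2},
        "professional user": {"predictability": 0.1, "powerfulness": 0.9},
    }

    def weighted_score(scores, weights):
        """Combine the fixed inspection scores into one scenario-specific evaluation."""
        return sum(scores[attr] * w for attr, w in weights.items())

    for scenario, weights in scenario_weights.items():
        print(f"{scenario}: {weighted_score(scores, weights):.1f}")
    # casual user: 7.6
    # professional user: 2.7

The same inspection scores thus yield different evaluations once the intended usage is taken into account.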

1. MiLE step-by-step

The inspection with MiLE runs as follows:

  • Preparation
    • Sketch user scenarios
    • Choose the Abstract Tasks
    • Choose the Attributes
  • Inspection
    • Execution of the Tasks
    • Scoring of the Tasks (through the attributes)
  • Evaluation
    • Define weights
    • Derive final scores

Inspection already provides valuable evaluations; in some cases, however, panels of users may be required for double-checking. When empirical testing is required, users are given a list of Concrete Tasks, i.e. a list of specific actions that they are asked to perform. The definition of the Concrete Tasks (different for each case) is based upon the results of the inspection, which has identified the portions of the application, tasks and attributes that need special attention.

  • Empirical testing
    • Define concrete Tasks for users
    • Let users perform their tasks
    • Evaluate users’ reactions

The reliability of MiLE as a systematic evaluation framework has proved to be very high: the execution of the Abstract Tasks (at the navigation and content levels) helps spot unexpected usability problems (inconsistencies, lack of clarity, etc.). Even "at-first-sight agreeable" sites, when put to the test through a systematic inspection "à la MiLE", may reveal weaknesses and defects.

2. MiLE for museum Web-sites

Some of the features of an application (such as navigation or layout) can be examined largely independently from the specific application domain; other features instead, such as content or the functions offered to the users, require a different evaluation schema for each application domain. In order to explore functions and contents for museum Web-sites (a specific sub-domain within the larger domain of cultural heritage applications), a specific panel of "experts" (the so-called "Bologna group") has been created, as a partnership between Politecnico di Milano and the Istituto Beni Culturali (IBC), the regional organization supervising cultural heritage activities in the Emilia Romagna region, with headquarters in Bologna.

The first step of the Bologna group was to define a museum Web site structure and to categorize all the possible pieces of content, functions and services in it; the resulting model is therefore a synthesis of the contents and features found through the analysis of a large number of sites (assumed to be the "universe of discourse"). The "contents survey schema" (see Appendix) has proved to be a very efficient tool for describing different museum Web sites in a homogeneous and comparable way, and thus for evaluating them. At this stage of the research we have listed more than a hundred "elementary" constituents, organized into three main groups:

  1. site’s presentation: general information about the Web-site;
  2. museum’s presentation: contents and functions referring to a “physical museum” (like “arrows” pointing to the real world);
  3. the virtual museum: contents and functions exploiting the communicative strength of the medium.

A further analysis has allowed us to detect "high-level" constituents (such as, for example, collections, services and promotion) which group the elementary constituents (a full account of all the pieces of the model can be found in the appendix to this paper).

The next job was to define a set of user scenarios, as a way to build a library of suitable ATs. A user scenario, in this context, is a pair «user profile, operation (that users may wish to perform)». Therefore, in a certain way, the tasks are coupled to user profiles, in the sense that a given task may be interesting for a given profile and meaningless (or irrelevant) for a different one. When performing the inspection, the inspector will learn from his/her customer who the intended users of the application are, and will concentrate on those tasks likely to be performed by them. In any case he/she will be free to create new tasks that better fit the communicative goals of the application, as long as he/she follows the guidelines of the method and its "philosophy".

2.1 User Scenarios

A Web site is an artefact devoted to communication: there is a sender, a message, many addressees and a context of use. That is why the concept of user scenario becomes crucial: a scenario can be defined as the description of a concrete episode of use of the application, a "story" about use. A scenario describes possible/intended users performing actions with the application. Obviously, it is impossible to define all the potential scenarios of use: we sketch here, for the museum domain, some of the most typical scenarios, useful for evaluating the usability of most Web sites. The inspector, interacting with the most important stakeholders, should trace more specific scenarios.

It is possible to synthesize the concept of user scenario as follows:

User scenario = User profile + Abstract task(s)

By user profile we mean a description of the relevant features of those stakeholders that will interact with the services offered by the application. The description of user scenarios can have different levels of granularity, from generic to very detailed scenarios. However, a scenario should portray the type of user, his/her goal and the task(s) necessary to achieve it. In order to sketch the user profiles, we took into consideration a number of variables, such as age, expertise and professional interests (e.g. school students, fine arts students, fine arts experts, tourists, etc.). Each relevant user profile is based upon a number of these variables.
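In code, a user scenario can be sketched as a small data structure; here is a minimal Python sketch (the field names and example values are our own illustration, not part of the MiLE specification):

    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        """Relevant features of a potential user of the site."""
        name: str                        # e.g. "Tourist 1"
        expertise: str                   # e.g. "well-educated, non-specialist"
        interests: list[str] = field(default_factory=list)

    @dataclass
    class UserScenario:
        """User scenario = user profile + abstract task(s)."""
        profile: UserProfile
        abstract_tasks: list[str]        # task IDs, e.g. ["PI 13", "PI 17"]

    tourist_1 = UserScenario(
        profile=UserProfile("Tourist 1", "well-educated, non-specialist",
                            ["permanent exhibition", "events calendar"]),
        abstract_tasks=["PI 13", "PI 17"],   # tasks from the library in section 2.1
    )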

Hereafter we present some plausible scenarios for museum Web-sites:

  • Tourist 1: generic well-educated person, interested in an in-depth visit of a museum he/she has never visited before. He would like to visit the real museum; therefore he needs to gather some general information (opening hours, ticket cost, etc.). Besides, he is very interested in the permanent exhibition and in the events calendar.
  • Tourist 2: generic well-educated person, with a precise amount of time to spend in the museum (say, 2 hours), who therefore needs to know what should not be missed. Apart from the practical information (opening hours, ticket cost, etc.), he looks for information about the most interesting/famous/important pieces of the collections.
  • Tourist 3: generic tourist, not particularly well-educated, who wants a fast tour of the museum to see the most famous pieces of the collections. Apart from the practical information (opening hours, ticket cost, etc.), he looks for information about guided tours and/or audio tours.
  • Tourist 4: very well-educated person who already knows the permanent collections. He regularly visits the Web site to find information about all the cultural events organized within the real museum (conferences, concerts, ...), in order to plan a visit. He will look for the current exhibits and the events calendar.
  • Tourist 5: very well-educated person who already knows the permanent collections. He knows he will be in town on a specific date, and therefore wants to know what special events will take place on that day. He will look for the current exhibits and the events calendar.
  • "Stray" site visitor: he "bumps" into the Web site by chance. He may be intrigued by the idea of finding funny interactive games, e-cards, etc.
  • Student 1: high school student, interested in a specific topic (a painter, a work of art, a period, etc.) for homework. He will search the collections, the list of authors and the list of works of art, looking for long and detailed descriptions.
  • Student 2: elementary school student, interested in a specific topic (a painter, a work of art, a period, etc.) for homework. He will search the collections, the list of authors and the list of works of art, looking for rather simple descriptions.
  • Teacher 1: high school teacher planning a visit to the museum for his/her class. He will search for educational material, information about the collections, and information about guided tours for schools.
  • Art lover 1: art lover who regularly visits the Web site. He looks for new material being introduced into the site (the work of art of the month, for example).
  • Art lover 2: art lover who wants to support the museum and would like to become a member. He is interested in the advantages of membership and looks for the membership form.
  • Teacher 2: high school teacher who wants to enter his class in a competition organized by the museum. He looks for the competition's application form and for some information about the competition's topic.
  • Journalist: journalist who has to write an article about the latest exhibition held at the museum. He needs to download the press release and to consult the section reserved for the exhibition.
  • Competitor 1: director of another museum. He looks for information about the general organization of the museum, its mission and its vision, in order to emulate the strategies of the museum.
  • Competitor 2: Web site manager of another museum. He looks for information about the institution's Web site, in order to design his own site.

Tasks (concerning the content evaluation) for the domain of museum Web sites are divided into two categories as regards their “concern”:

  1. Practical Information Tasks;
  2. Knowledge Tasks.

Examples of Practical Information Tasks are:

  • PI 12: Find information about the accessibility for people with disabilities
  • PI 13: Find information about guided tours/audio tours
  • PI 17: Find information about special events (lectures, conferences, concerts…) on a specific date
  • PI 26: Understand if X hours are enough for an overview of the museum's collections

Examples of Knowledge Tasks are:

  • K 43: Find all the works of art dating back to a specific historical period
  • K 44: Find the biography of a specific artist
  • K 49: Find educational material to prepare a class for a visit
  • K 54: Get an overview of the permanent collections
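Such a task library lends itself to a simple lookup-table representation. The following Python sketch uses the task IDs listed above (the data structure itself is our illustration, not prescribed by MiLE):

    # Abstract Task library for the museum Web-site domain (excerpt).
    # "PI" = Practical Information Task, "K" = Knowledge Task.
    ABSTRACT_TASKS = {
        "PI 12": "Find information about the accessibility for people with disabilities",
        "PI 13": "Find information about guided tours/audio tours",
        "PI 17": "Find information about special events on a specific date",
        "PI 26": "Understand if X hours are enough for an overview of the collections",
        "K 43": "Find all the works of art dating back to a specific historical period",
        "K 44": "Find the biography of a specific artist",
        "K 49": "Find educational material to prepare a class for a visit",
        "K 54": "Get an overview of the permanent collections",
    }

    def tasks_in_category(prefix: str) -> dict[str, str]:
        """Select all tasks of one category, e.g. 'PI' or 'K'."""
        return {tid: t for tid, t in ABSTRACT_TASKS.items() if tid.startswith(prefix)}

    print(tasks_in_category("K"))    # the four Knowledge Tasks above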

2.2 Usability Criteria

As regards the list of attributes to be scored during the inspection, their generality (e.g. "richness", "completeness") certainly allows a wide use in other domains (if necessary, with some revisions).

  • A1 Authority: The author is competent in relation to the subject
  • A2 Currency: The time scope of the content's validity is clearly stated; the information is kept up to date
  • A3 Consistency: Similar pieces of information are dealt with in similar fashion
  • A4 Structure effectiveness: The organization of the content pieces is not disorienting
  • A5 Completeness: The information required is complete
  • A6 Richness: The information required is rich (many examples, data…)
  • A7 Clarity: The information is easy to understand
  • A8 Conciseness: The basic pieces of information are given; texts are not too long and redundant
  • A9 Multimediality: Different media are used to convey the information
  • A10 Multilinguisticity: The information is given in more than one language
  • A11 Efficiency: The task can be performed successfully

3. Examples

We now introduce a few examples of inspection that may help the reader actually grasp how our method works. The examples are very simple and are taken from actual Web sites. We hope that, in the period between the writing of this paper and the time the reader reads it, the Web sites will not be modified (the impossibility of "freezing" Web sites makes it difficult, in practice, to develop examples of inspection that maintain their validity over a long span of time), so that the reader may directly try to "inspect" them.

Practical Information AT:

Find the educational activities occurring on a range of dates in a real museum

The user scenario for this task is that of a family with two sons aged 5-10, living in the town where the real museum is located, in November-December 2002. They would like to know what activities for families will take place on the November-December week-ends. We performed this task on many different Web sites; we describe here our findings for the National Gallery (London) site and the Hermitage Museum (St. Petersburg) site, on the basis of an inspection that took place on November 15th, 2002. The focus of our attention is the section named "information about the museum activities and events" in our schema (see the appendix for the details). The relevant attributes that we will use for this brief example are the following:

  • (A2) currency of the information;
  • (A5) completeness;
  • (A7) clarity;
  • (A6) richness of the information provided (very important in order to make the potential interest of the events understandable).

In the home page of the National Gallery Web site we find two relevant links: "Plan your visit" and "Education".
If we click on "Plan your visit" we get a wide choice of "information" links, among them "family visit": in these pages we can find all the general information about National Gallery events and facilities for families, as well as a link to a calendar listing all the National Gallery public events and activities, day by day, for two months: a very effective and easy-to-use tool.
If we click on "Education" we get to the "calendar" again, and to a wide choice of activities that children can also perform on-line.
From a graphic point of view the Web site is very pleasant: the Web pages for children feature drawings by the famous illustrator Quentin Blake.
On the whole, we can say that the information is well updated and exhaustive, and that the task can be performed quickly and easily.
In the Hermitage Museum home page we can find three relevant links, in two different menus: "Children and education", "Information" and "Calendar".
If we click on "Children and Education" we find the Web pages of the museum's School Center, with a list of activities: it is not clear whether these activities are offered to schools only or also to families; in any case there is no practical information, only a general and vague illustration of the museum's educational activities.
If we click on the "Information" section, and then on the subsection "visitor information", we get a short list of categories, among which we can click - as relevant for our purpose - on "tours and lectures". We then get only a page illustrating these activities in a generic way, without any practical information on-line.
Through the "calendar" Web pages we can get a list of main events only (exhibitions, festivals, etc.).
In summary, we can judge the Hermitage Museum Web site as widely insufficient for performing this specific task: the information is very poor and generic.
The table below synthesizes our scoring and evaluation.

                               |  A2 |  A5 |  A7 |  A6 | Global score for this AT
  Scores, National Gallery     |   9 |   8 |   9 |   7 | 8.25 (plain average)
  Scores, Hermitage            |   1 |   3 |   3 |   1 | 2 (plain average)
  Weights                      | 0.3 | 0.5 | 0.1 | 0.1 |
  Weighted scores, National    | 2.7 | 4.0 | 0.9 | 0.7 | 8.3 (weighted average)
  Weighted scores, Hermitage   | 0.3 | 1.5 | 0.3 | 0.1 | 2.2 (weighted average)
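For concreteness, the computation behind this table can be reproduced in a few lines of Python (a sketch using the scores and weights above; the variable names are ours):

    # Inspector's scores per attribute for this Practical Information AT.
    scores = {
        "National Gallery": {"A2": 9, "A5": 8, "A7": 9, "A6": 7},
        "Hermitage":        {"A2": 1, "A5": 3, "A7": 3, "A6": 1},
    }
    weights = {"A2": 0.3, "A5": 0.5, "A7": 0.1, "A6": 0.1}  # sum to 1

    for site, s in scores.items():
        plain = sum(s.values()) / len(s)                    # unweighted average
        weighted = sum(s[a] * weights[a] for a in weights)  # weighted average
        print(f"{site}: plain {plain:.2f}, weighted {weighted:.1f}")
    # National Gallery: plain 8.25, weighted 8.3
    # Hermitage: plain 2.00, weighted 2.2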

We do not ask the reader to agree with our scores (we may be poor inspectors) but to appreciate the method on a number of issues:

  a) We are evaluating a specific task, not expressing a global evaluation; in addition, we are scoring each single attribute. This level of detail introduces two advantages: precision of the feedback to application designers, and the possibility of pinpointing the causes of possible discrepancies among different inspectors.
  b) Through weights we can take into account the specific objectives of the (portion of the) application. In the example above, we gave great relevance to attributes A2 and A5, and minor relevance to A7 and A6.
  c) A global concise evaluation can be obtained by combining the evaluations for each attribute (as in the above table), and/or by combining the evaluations for the different ATs (again using weights in order to attribute different relevance to each AT).
  d) Different systems of weights can be used in order to take into account different user profiles.

Cognitive AT:

Find all information about a specific subject

This task might be performed by an art historian looking for information about a topic he/she is currently researching; let us say the female portraits painted by John Singer Sargent. Some of Sargent's works are held by the Tate Britain Gallery in London and by the Smithsonian American Art Museum in Washington.

The relevant attributes that we will use for this brief example are the following:

  • (A11) efficiency: the search can be performed successfully and quickly
  • (A5) completeness
  • (A6) richness of the information
  • (A4) structure effectiveness

Using the Tate Britain Web site we have two choices: to use the search engine or to navigate the site. Writing the name "Sargent" in the search window of the home page, we get a list of 102 records. If we enter the section "collections" we can use the "Tate collections" tool and select one among different options. We can use the "artist search" tool, getting a long list of authors in alphabetical order: if we click on "Sargent" we find 45 matches, displayed with a thumbnail, title, date and catalogue number. We also have the possibility of saving single items in a customized repository ("my selection"). If we click on the title or the thumbnail we get, for each work, the basic data, a description and the possibility of zooming the image; for many works it is also possible to read the catalogue description. We can also refine our search using the "subject search" tool: by combining two subjects (there is a list we can browse), for example "groups and movements" and "people", we can explore all the female portraits of the 19th century at the Tate.

The Smithsonian American Art Museum Web site offers a similar functionality: a search engine in the home page. Inserting the full name "John Singer Sargent" in the form for generic search, we obtain a list of 75 Web pages including this artist's name (from museum catalogue cards to press releases of past exhibitions); we can refine our search and compile the "artist name" form: we are given a list of 8 artworks. The description of the works includes only basic data and a list of keywords. The same results can be obtained by choosing the section "collections and exhibitions". If instead we search the section "art inventories", we can leave the Smithsonian American Art Museum Web site and search the Inventories of American Paintings and Sculpture (www.siris.si.edu). The Inventories - established by the Smithsonian American Art Museum - catalogue American art in collections worldwide and provide information on over 335,000 artworks. The information is compiled from a variety of sources, but images of the artworks are not yet available for online viewing. The result of a search in the Inventories databases, using the full name of the artist and the subject "female portrait", is a long list of 441 items with the basic data of the artworks and their references.

On the whole, we can say that both sites permit us to reach a first set of basic information by using the search engines; if we want more specific data, at a scientific level, we have to use other tools. Only the Smithsonian Web site allows us to search a real database, but without any images (a significant restriction for artwork research!).

The table below synthesizes our scoring and evaluation.

                               | A11 |  A5 |  A6 |  A4 | Global score for this AT
  Scores, Tate                 |   8 |   7 |   7 |   7 | 7.25 (plain average)
  Scores, Smithsonian          |   7 |   6 |   5 |   7 | 6.25 (plain average)
  Weights                      | 0.1 | 0.3 | 0.2 | 0.4 |
  Weighted scores, Tate        | 0.8 | 2.1 | 1.4 | 2.8 | 7.1 (weighted average)
  Weighted scores, Smithsonian | 0.7 | 1.8 | 1.0 | 2.8 | 6.3 (weighted average)

4. Conclusions and future work

The general distinctive features introduced by MiLE can be synthesized as follows:

  • Efficient combination of inspection and empirical testing
  • Use of Abstract Tasks, ATs, as guidelines for inspection
  • Use of Attributes, as a way to detail scoring
  • Use of Concrete Tasks, CTs, as guidelines for empirical testing
  • Use of weights, as a way to translate scores into evaluation
  • Use of user profiles in order to assign weights

The current work consists in identifying, through the user scenarios, the "universe of possible functions" that a museum Web-site should support, matching user-profile features with Abstract Tasks. The goal is to generate an overall schema showing what type of user is interested in what information/action. The combination user profile/AT is what we mean by user scenario.
We aim at providing a contribution to the community of people interested in museum Web-sites (museum curators, designers, Web managers, etc.), as part of a shared understanding of what it means to evaluate the quality and usability of "virtual artifacts".
Since the amount of work to be performed is substantial, and we would like to generate a discussion in a large community, the authors strongly encourage all interested parties to contact them, in order to enlarge the scope and the validity of this research on evaluation.

References

  • Blackmon, M.H., Polson, P.G., Kitajima, M., & Lewis, C. (2002). Cognitive Walkthrough for the Web, in CHI 2002 Conference on Human Factors in Computing Systems. ACM Press
  • Brinck, T., Gergle, D., Wood, S.D. (2002) Usability for the Web: designing Web sites that work, Morgan Kaufmann Publishers, Academic Press, London
  • Chi, E.H., Pirolli, P., Chen, K., and Pitkow, J. (2001) Using information scent to model user information needs and actions on the Web, in Proc. of the ACM Conference on Human Factors in Computing Systems, CHI 2001 (pp. 490-497), Seattle, WA
  • Chi, E.H., P. Pirolli, and J. Pitkow (2000) The scent of a site: A system for analyzing and predicting information scent, usage, and usability of a Web site, in Proceedings of the Conference on Human Factors in Computing Systems. Hague, Netherlands
  • Costabile M.F., Garzotto F., Matera M., Paolini P., Abstract Tasks and Concrete Tasks for the Evaluation of Multimedia Applications, presented at the Int. Workshop on Theoretical Foundations of Design, Use, and Evaluation, Los Angeles, CA, USA, 1998
  • Costabile M.F., Garzotto F., Matera M., Paolini P., The SUE Inspection: A Systematic and Effective Method for Usability Evaluation of Hypermedia, "IEEE Transactions on Systems, Man, and Cybernetics", Vol. 32, No. 1, January 2002
  • De Angeli, Costabile M.F., Garzotto F., Matera M., Paolini P., On the advantages of a Systematic Inspection for Evaluating Hypermedia Usability, "International Journal of Human Computer Interaction", Erlbaum Publ. - in print
  • De Angeli, Costabile M.F., Garzotto F., Matera M., Paolini P., Validating the SUE Inspection Technique, in Proc. AVI, 2000, p. 143-150.
  • Di Blas, N., et al. (2002) Evaluating the Features of Museum Web Sites, in: Bearman D. & Trant J. (eds) Museums and the Web 2002. Selected Papers from an International Conference, Archives & Museum Informatics, Pittsburgh, USA: 179-185
  • Garzotto F. & Matera M., A Systematic Method for Hypermedia Usability Inspection, "The New Review of Hypermedia and Multimedia", vol. 3, pp. 39-65, 1997
  • Garzotto F., Matera M., Paolini P., A Framework for Hypermedia Design and Usability Evaluation, in Designing Effective and Usable Multimedia Systems, P. Johnson, A. Sutcliffe, J. Ziegler Eds., Boston, MA: Kluwer Academic, 1998, pp. 7-21
  • Garzotto F., Matera M., Paolini P., Abstract Tasks: a Tool for the Inspection of Web-sites and Off-line Hypermedia, in Proc. ACM HT, 1999, pp. 157-163
  • Kitajima, M., Blackmon, M.H., & Polson, P.G. (2000) A comprehension-based model of Web navigation and its application to Web usability analysis, in S. McDonald, Y. Waern & G. Cockton (Eds.), People and Computers XIV – Usability or Else! (Proceedings of HCI 2000, pp. 357-373)
  • Pirolli, P. and S.K. Card (1999), Information foraging, "Psychological Review", 106, p. 643-675
  • Polson, P. G., Lewis, C., Rieman, J., & Wharton, C. (1992), Cognitive walkthroughs: A method for theory-based evaluation of user interfaces, "International Journal of Man-Machine Studies", 36, 741-773
  • Rosson, M.B., Carroll, J. (2002) Usability Engineering, Morgan Kaufmann

Acknowledgements

The MiLE work has been partially supported by the European Commission, IST project 2000-25465 "VNET5 - Advancing User Centered Product Creation in Electronic Publishing".

We also wish to acknowledge the work of the other members of the Bologna group, who made this (still ongoing) research effort possible. We therefore warmly thank Dede Auregli (Galleria d'Arte Moderna di Bologna), Gilberta Franzoni (Musei Civici di Arte Antica di Bologna), Paola Giovetti (Museo Civico Archeologico di Bologna), Laura Minarini (Museo Civico Archeologico di Bologna), Federica Liguori (Politecnico di Milano), Carolina Orsini (Università di Bologna), Uliana Zanetti (Galleria d'Arte Moderna di Bologna).

   



