
Category Archives: collaboration

Somehow it has taken me quite some time to write this post. However, if after so long I'm still interested in writing about the activities organized by the Lab of the Center of Contemporary Culture of Barcelona (CCCB Lab) and Citilab, specifically its Expolab project on cultural institutions and 2.0 practices, it's a sign that I found many interesting ideas and reflections in them.

Using the same title to talk about two different activities isn't very precise. In short, I'm referring to two activities: a talk about museums 2.0 and a workshop about 2.0 practices in cultural institutions. Although the 2.0 attribute was the leitmotif of both sessions, there was a big difference in how the concept was understood.

The talk “Cultural institutions 2.0?” consisted of a round-table discussion where representatives of several museums and cultural institutions explained and reflected on projects involving web 2.0 tools led by their institutions. My personal impression is that, although there are some interesting projects that really make an effort to give users a voice and promote participation, web 2.0 tools are mostly used as just another communication channel to attract more visitors, or simply to strengthen the links with existing ones. The institution keeps control, and users are only allowed to participate in very specific ways.

The choice of technology in and of itself seems to explain and justify why these institutions identify themselves with the 2.0 label. Other usual 2.0 qualities, such as transparency or policies that reward users for their contributions, weren't seen as relevant.

The workshop, on the other hand, consisted of developing an understanding of the meanings underlying 2.0 practices, in other words the 2.0 philosophy. The key aspect was the approach; technology was a secondary element to take into account.

The purpose of the workshop was to share and put into practice new approaches towards creation, collaboration and the continuity of activities started in centers of creation and cultural dissemination. Communication with the public is important, but participation is a strategy that forces us to think carefully about interaction styles, shared creation, collaboration and broadcasting.

The workshop was conducted under a participatory design approach. Groups were formed according to participants' interests. Each group then developed a project and built a 3D model (it's amazing how much people enjoyed using plasticine, me included ;). Time was very limited, but there was still room for quick peer-to-peer reviews, as well as a public presentation of each group's project. During the afternoon session, groups were asked to think about specific questions related to their project definition.

Obviously, the intention of the workshop was to generate questions and open spaces for reflecting on 2.0 practices and participation, rather than to offer answers. Maybe this is why it has taken me so long to write about it. I'll certainly need to read, think and learn more before I can stop thinking about participatory approaches (which means there will be more posts about this 😉).


From March until mid-April, the Advisory Board (AB) of the Horizon Report: Iberoamerican edition has been working collaboratively, first through a wiki and later in a face-to-face meeting in Puebla, México, to select and identify the technologies, challenges and trends with the greatest potential over the next five years in Iberoamerican Higher Education.

On the 14th, 15th and 16th of April, Puebla became the stage for the final vote on the emerging technologies. Collaborative Environments and Social Media are the ones considered to have the greatest impact in less than a year. According to the results of the vote, Open Content and Mobiles will be adopted in a time horizon of two to three years. Finally, in the long term (five years), Augmented Reality and the Semantic Web are foreseen as the main promises for education.

After the meeting in Puebla, a report will be written with the list of selected technologies, examples of use, and the main challenges and trends of Higher Education in Iberoamerica. The final document is expected to be presented at the Summer Conference of the New Media Consortium, which will take place at the beginning of June 2010.

The Horizon Report Ib. follows the same methodology as the main Horizon Report editions. The process is structured according to the Delphi technique, a highly structured method in which participants answer a set of questions in order to identify the emerging technologies with the most impact on learning, teaching and creative inquiry. Through two rounds of voting, the general list of emerging technologies is reduced to 12, and later to a short list of 6 technologies.
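The tallying step of that two-round shortlisting can be sketched roughly as follows. This is only an illustrative model under my own assumptions: the function name, ballots and technology lists are all hypothetical, and a real Delphi process also involves discussion and re-framing between rounds, not just counting votes.

```python
# A minimal, hypothetical sketch of two-round shortlisting: each
# panellist votes for the technologies they consider most promising,
# and only the most-mentioned ones survive each round.
from collections import Counter

def shortlist(ballots, keep):
    """Count how many ballots mention each technology and return
    the `keep` most-mentioned ones, best first."""
    tally = Counter(tech for ballot in ballots for tech in ballot)
    return [tech for tech, _count in tally.most_common(keep)]

# Hypothetical first-round ballots from three panellists.
round_one = [
    ["Social Media", "Mobiles", "Open Content"],
    ["Social Media", "Augmented Reality", "Mobiles"],
    ["Social Media", "Semantic Web"],
]

# Round 1 trims the broad list to at most 12 candidates; Round 2
# would repeat the vote on the survivors to reach the final six.
intermediate = shortlist(round_one, 12)
final = shortlist(round_one, 6)
print(final[0])  # "Social Media" leads with three mentions
```

The point of modelling it this way is that the questions and the cut-off sizes (12, then 6) shape the outcome as much as the votes themselves, which is exactly the effect discussed in the impressions below.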

The Horizon Report: Iberoamerican edition is an initiative of the eLearn Center (UOC) and the New Media Consortium.

Some impressions

  • Despite the technological approach of the Horizon Reports, discussions in the Iberoamerican edition tended to focus attention on issues related to pedagogy and methodology of use. Personally, I was happy to hear those reflections. It's probably impossible to develop a pedagogy before adopting a technology. However, institutions (and people in general) can fall into the trap of adopting new technologies just because “they're cool”.
  • Process is important. Something I've learned from the Horizon Report: Iberoamerican edition is that questions not only guide but can also determine answers. What do we ask, and why? The way the vote was organized had an important effect on the final selection of technologies. The Delphi technique is interesting, but at some point it would be important to be more flexible. The same questions and methods don't work for everyone.
  • Too much diversity for a single report. Talking about Iberoamerica means referring to a huge diversity that is impossible to fit into “the same box”. Personally, I feel it's very difficult for a single report to capture a picture of the implementation of emerging technologies in Higher Education across Iberoamerica. Giving a voice to all parties is certainly a challenge.
  • The notion of digital natives is starting to lose strength. Mark Bullen would be happy: finally, young people are not seen as a group of geeks who create fear among older generations. They may be more used to technology, but that doesn't mean they're more efficient at searching for information, collaborating, filtering information… in a word, learning.
  • Technology alone doesn't change anything. On the contrary, it can easily generate new dependencies. I don't mean we shouldn't adopt technology; it is currently part of our lives, so it's necessary to develop competencies and a digital literacy. However, I wonder whether the emphasis shouldn't be put on critical thinking rather than on the tools we use.
  • Some of the selected technologies imply values and ways of being completely opposed to the logic of capitalism. The idea of promoting collaboration and content exchange (through open content) is really exciting and promising. However, mainstream adoption requires more than simple access to technology. Are we ready for this?
  • Although the final product is a report, there are very interesting materials, opinions and exchanges in the project's wiki. Of course it can take some time to read, but it's the best way to gain a deeper insight into the project.

Overall, the balance of the meeting in Puebla was positive. It's true that many things need more discussion and reflection, but in general the participants of the Advisory Board left the room feeling they had learned something. It has also been a starting point for the creation of a community of experts from Iberoamerica focused on the educational applications of emerging technologies in Higher Education.

Let’s see what happens, but at least right now the future looks promising.

The idea that school isn’t the only place where we learn isn’t new. In fact, in many of the seminars I’ve attended lately, one of the key ideas was the need to rethink school and the kinds of learning that students are supposed to achieve there.

Among the voices critical of how formal education is organized, the notion of informal learning seems to be something to pay attention to, or at least to give a more careful look. Briefly, informal learning can be defined as:

Informal learning is never organised, has no set objective in terms of learning outcomes and is never intentional from the learner’s standpoint. Often it is referred to as learning by experience or just as experience.

We are constantly learning, even if, at first, we don’t value the amount of time and effort invested in a certain activity; that is to say, all that learning remains invisible. Sadly, it so often seems necessary to have a certification from a renowned center or institution in order to get some recognition. Now, some institutions, teachers and researchers are starting to question the validity of formal education as the only channel for managing learning, especially the learning required in the Knowledge Society.

At this point, the project headed by Cristóbal Cobo, of the Facultad Latinoamericana de Ciencias Sociales en México (FLACSO-México), and John Moravec is proposed as an initiative to identify and recognize the value of all this informal learning that is kept invisible.

Invisible Learning is a collaborative book (in English and Spanish) and an online repository of bold ideas for designing cultures of sustainable innovation.

In case you want to take part in this project, just have a look at

Does it make sense to talk about authorship in collaborative environments? Should all web 2.0 knowledge builders be anonymous? What’s the value of authorship?

These are some of the questions that started to arise after reading a post on zephoria’s blog. Here I copy the part that I consider sums up the key issue:

“Since Knol launched in beta, folks have been comparing it to Wikipedia (although some argue against this comparison). Structurally, they’re different. They value different things and different content emerges because of this. But fundamentally, they’re both about making certain bodies of knowledge publicly accessible. They just see two different ways to get there – collaborative anarchy vs. controlled individualism. Because Knol came after Wikipedia, it appears to be a response to the criticisms that Wikipedia is too open to anonymous non-experts.”

Collaborative anarchy vs. controlled individualism: is that what we should consider when developing collaborative environments for knowledge building? Does authorship guarantee the credibility of a text, or of any other material?

Obviously, Wikipedia seems to be “the” example of collaborative knowledge production. However, isn’t the critical mass of editors, together with other control measures, a guarantee of the veracity of information? At this point it’s useful to take into account the following:

“a controversial study by Nature in 2005 systematically compared a set of scientific entries from Wikipedia and Britannica (including some from the Britannica Web edition), and found a similar rate of error between them.”

Possibly, the next question I should ask myself is: what determines our level of trust when evaluating information? In many contexts Britannica probably seems more trustworthy than Wikipedia when, from my point of view, we should keep the same level of skepticism in both cases. I don’t know why, but it seems that “collaborative anarchy” can easily get associated with chaos and a lack of rigor. And really, after reading a bit about Wikipedia’s history, I’ve realized that the information posted there is much more closely supervised, and can be corrected faster, than in any other online encyclopedia.

Obviously, scalability in collaborative knowledge production environments is a problem or, at least, a difficulty to overcome. However, when it succeeds it brings an additional value: the consolidation of a digital identity. We don’t know who Britannica’s writers or Wikipedia’s editors are, so authorship can always remain an unanswered question. At this point, I would say that Wikipedia can possibly have a stronger digital identity than many other online encyclopedias. In any case, the issue behind authorship is closely related to responsibility. Who will accept responsibility (legal, economic…) in case someone feels offended by false information?

I don’t want to underestimate the responsibility behind everything I/you can say, write, post or just reproduce, but I’m not sure the solution is an economic or legal penalty. Wikipedia has developed its own mechanisms to avoid and fix errors, and its corrections are the result of a public debate. This is more effective than simply posting a note acknowledging the mistake, as many media outlets do.