17 Mar 2009

My last post on cooperation was not about research I would do myself; it highlighted the best and most useful results I have come across on the old question of why cooperation exists. I am studying Artificial Intelligence (AI). Today, I want to step into the shoes of an AI researcher and explain to other researchers why research on cooperation is a good idea.

Cooperation as an AI topic
A natural question is: Why should someone in AI spend precious research money on cooperation? After all, explaining why cooperation exists is generally the domain of sociologists and biologists, maybe economists. AI is based on Computer Science, and while it surely should try out inspirations from other disciplines, its main purpose is still to build things that work in a new way (the initial goals of building something entirely new have been refined). More and more (due to a lot of fruitless approaches and now also due to the economy), I hear the question of why any given approach would be worth paying for. What can the new thing do or show that lets people do things better (for instance, more efficiently) than before?

I think that cooperation is a fruitful theme in this context and want to explain why.

Autonomous agents
An important goal for the future is intelligent agents that carry out tasks on our behalf. We don't want to tell them too precisely what to do (it's work, and we might give bad instructions), and we want them to interact with each other in the world. This is why they need to act autonomously. They also need to be reactive in an unpredictable environment, which for them mostly consists of the actions of other agents.
My simple point is that the notion of cooperation is a good tool to model this. First, the actions of other agents can mean a lot of things to me. But if I were pressed to put it into really simple terms, I could label those actions as being good (cooperative) or bad (defecting) towards me. It's a simplification of the world, sure. But we have to start somewhere*. With a modeling tool like the Prisoner's Dilemma I can already express that agents are autonomous and depend on what other agents are doing. That's already some modeling effort covered, and in a way that researchers agreed upon long ago as a standard method. I can still play around with some settings, though, like the utility values or the number of agents involved per interaction.
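To make that concrete, here is a minimal sketch of such a model in Python. The payoff numbers are just the classic textbook values for the Prisoner's Dilemma, picked for illustration; a real model would swap in whatever utilities fit the domain.

# A minimal Prisoner's Dilemma sketch: two autonomous agents each pick
# "C" (cooperate) or "D" (defect), and the payoff matrix decides what
# each takes home. Values are the classic textbook numbers
# (T=5, R=3, P=1, S=0) and are only illustrative.
PAYOFFS = {
    ("C", "C"): (3, 3),  # both cooperate: reward for each
    ("C", "D"): (0, 5),  # I cooperate, you defect: sucker vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # both defect: punishment for each
}

def play(action_a, action_b):
    """Return the utility each agent takes home from one interaction."""
    return PAYOFFS[(action_a, action_b)]

print(play("C", "C"))  # (3, 3): both cooperate
print(play("D", "C"))  # (5, 0): the defector gains at the cooperator's expense

The settings mentioned above are exactly what one would vary here: the numbers in the matrix, or extending the lookup to more than two agents per interaction.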


Efficient Systems
When I say I model cooperation, I mostly get looks that imply I'm being labeled a "Hippie". But in reality, cooperation means efficiency. Cooperation makes sense if its outcome is superadditive. This means that the utility produced together is more than the sum of the utilities the agents would have produced alone**. It is also safe to assume that when one agent defects against the other, he takes home a lot more utility, but the overall utility is still less than if both had cooperated. So for system performance, it would be great if agents decided to cooperate often. This should be sold as a hard fact more often: cooperation makes systems of autonomous agents efficient. It makes sense to do research with this goal in mind.
One more thing about complexity and superadditivity: not all interactions are superadditive, of course. But I think that as the behaviours of agents become more advanced and complex, their interactions in multiagent systems will be superadditive more often (since superadditivity often arises when no one is an expert at everything).
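As a small numeric illustration, using the same hypothetical payoff numbers as in the sketch above: mutual cooperation yields the largest total utility, even though a lone defector individually takes home the most.

# Total utility per outcome, same illustrative numbers as above.
outcomes = {
    "both cooperate": (3, 3),  # 6 in total: best for the system
    "one defects":    (5, 0),  # 5 in total: the defector alone gets the most
    "both defect":    (1, 1),  # 2 in total: worst for the system
}

for name, (u_a, u_b) in outcomes.items():
    print(f"{name}: total utility = {u_a + u_b}")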


Multiagent architecture - Cooperation built-in?
Researchers come up with design guidelines for multiagent systems quite often. That cooperation is an essential design guideline for decentralized systems with autonomous agents is sometimes already part of this. Take, for instance, the AMAS architecture. There is a subheading called "Self-organisation by co-operation". The assumption is that each agent always tries to cooperate and that the system thus becomes efficient. I think that is a little simplistic. Take humans: for us, cooperation is natural and feels right. But we don't have to cooperate, and in fact we often don't. If there are agents in the system that defect, then let that happen. Work around them if you can. Defect back against them when you have to deal with them, or see if they change their behaviour.
Multiagent systems can - like nature - be heterogeneous, and then it doesn't make sense to assume the same behaviour for all agents.
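One way to picture "defecting back" without assuming anything about the rest of the population is a tit-for-tat-style rule. The sketch below is only an illustration, with made-up strategy names and round count, pairing such an agent with a persistent defector: it stays cooperative with cooperators but punishes a defector after the first round.

def tit_for_tat(partner_history):
    """Cooperate in the first round, then repeat the partner's last move."""
    return "C" if not partner_history else partner_history[-1]

def always_defect(partner_history):
    """A persistent defector, regardless of what the partner does."""
    return "D"

def run(rounds=5):
    my_moves, partner_moves = [], []
    for _ in range(rounds):
        my_move = tit_for_tat(partner_moves)    # react to the partner's history
        partner_move = always_defect(my_moves)  # the defector ignores mine
        my_moves.append(my_move)
        partner_moves.append(partner_move)
    return list(zip(my_moves, partner_moves))

print(run())  # [('C', 'D'), ('D', 'D'), ('D', 'D'), ('D', 'D'), ('D', 'D')]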

Outlook
Cooperation is one way to start modeling complex interactions. There are simple models everybody understands, and we can agree that cooperation makes systems efficient. However, it is important to note that so far we have mostly talked about the question of whether an agent wants to cooperate with another. Much more complicated is the how that comes afterwards.
How do two agents cooperate once they both try? Can we make any general models for this at all? There is so much context involved, so many circular dependencies, so many sources of uncertainty. I hope we can find a simple model for this that turns out to be as usable as the Prisoner's Dilemma.


* And we have to talk about it. Communicating about science is really important, and using simple, accepted models helps a great deal with that.
** You can model such a situation nicely with the Prisoner's Dilemma.
