17 Oct 2020

I bought a Fairphone 3. I was hoping to quickly receive a Fairphone-supported, Google-free Android again. You know - so there is not this giant corporation getting to keep tabs on where I am at all times and who I talk or write to. Plus, it doesn't really feel like my phone if a Google account is "connected" on there.

Anyway, Fairphone didn't actually want to go to the same trouble again as they did with the Fairphone2, so I had to wait. And now, for the first time, I actually "rooted" my phone. It's now mine again!

I went with the /e/ foundation's version of LineageOS: they have a great mission (an actual Google-free phone[1]), a nice roadmap, and a good tutorial for my exact phone model, which they officially support.

After two weeks of using the new OS and apps, I can say I am quite happy!

 

How I'm using it

Here are some points to make on what the /e/+LineageOS approach is and how I'm using it:

  • You don't need an /e/ account. It's only a service they offer, should some people like to back up mails and contacts between phones or computers. I have Nextcloud for that.
  • The /e/ foundation added the MagicEarth app, so there is decent navigation based on OpenStreetMap data. I have to say it works quite well!
  • /e/ comes with their own app store, which I like. It has apks from popular non-free apps, should you want/need them, like Whatsapp or Netflix. The source for these apk files is cleanapks.org. That is still a hot topic in the free software world. You can install other stores of course, like FDroid (completely free) or Aurora (niche apps from Play Store). Updating apps works well in all of those.
  • They don't give you much control over what's on your home screen (app launchers, widgets). You can't delete the standard ones, so you have less space for the ones you care about. The solution is pretty cool once you know it: you can install a different launcher. I used Launcher<3. Now I have space for the two widgets I love: AgendaWidget (installable from the app store) and TaskWidget (comes with OpenTasks).
  • The old gang for supporting my Nextcloud integration on Android is working better each time I set up a phone: Nextcloud app + DAVx5 + ICSx5 + OpenTasks
  • Other notable base necessities, while we're at it: AnySoftKeyboard, KeepassXC, WebTube


What is the future for /e/? Apparently, they're working on a more aligned look and feel, so that the standard apps they offer look more alike. I'd like that. For example, the Mail app is a fork of K-9 that got a clean, modern look. However, the fork is based on a rather outdated version; I hope they catch up.
 

Drawbacks

I don't want to leave the impression that not going with the mainstream is all roses. It does require a small sacrifice, on top of doing your own OS installation (which is not really hard, but will take you at least an hour, and then you still have to install and configure apps).

Here is a list of annoyances I have so far:

  • Typing with auto-correction is okay, but I know it's a bit worse than cutting edge. More mistakes slip in.

  • Webtube's full-screen mode is not full screen.

  • MicroG keeps crashing, and I don't know what that means for my OS functionality.

  • My phone would like to connect to my ChromeCast but it can't.

  • Text-to-speech is a while away but somebody's working on it.

Actually, all or most of these issues are being worked on. It's a matter of time.

Living somewhat free of giant corporations means you'll be living in the slight past. Maybe two or three years behind. It's really not that bad, but I can see that not everyone feels like they can do it.

 

Trust

 

A big issue in this whole story, and also a complex one, is trust - we are here because we don't trust Google or Apple to be good stewards of all our data for all time. But we can't do it alone. We have to trust the makers of our alternative tooling to some degree, as well. In this situation, I am putting various levels of trust in the makers of LineageOS, the /e/ foundation and (by extension) cleanapk.org. But I'm also trusting app makers like the people behind KeepassXC. Some of the trust here is easier to maintain with the monopolists - we're trusting Google and Apple (at the moment) to keep a good eye on the security aspect of their app stores.

It's complicated.

 



[1] /e/ uses MicroG as a free and open-source replacement for Google Play Services, and Mozilla Location Service for geolocation. One of their goals is to support being Google-free for non-technical users. They even sell smartphones (like the FP3) with /e/ pre-installed.

# lastedited 18 Oct 2020
28 Aug 2018

Software is a peculiar thing. It is made by engineers, but estimating its costs and understanding the end product work very differently than in other engineering sectors. This difference comes from the level of complexity that comes with even reasonably-sized software. The digital age has only just begun. It might be a while until we can estimate and understand software projects, if ever.

The paragraph above is true and well-understood in the software industry by everyone who is paying attention. I refer you to two recently-published and very well-written articles about this. Every layman should be able to read them. They look at this problem from two different angles: what reasonable costs and expected stability of delivered software should be, and the understandability of software, especially when data science and machine learning are being used.

The Age Of Invisible Disasters:

Nobody dies on a failing CRM project. At least not so far as I know. So why should you care? Everyone still gets paid. You should care because within government, the only sector where failure gets any real coverage, I count £20b in failed IT projects over the last decade alone. Because within publicly quoted companies such failures are a bonfire of shareholder value. Because the ability to deliver technology projects will become the determining survival factor for many companies. Finally, you should care because if there is a skills shortage (or as I prefer to say, a talent shortage) and talent is tied-up with moribund IT projects, the lost opportunity cost to business is vast.

The terrifying, hidden reality of Ridiculously Complicated Algorithms:

If, as Jure suspects, machine judgement will become measurably better than human judgement for important decisions, the argument for using it will only grow stronger. And somewhere in that gap between inputs and outputs – the actual decision making part of the process itself – is something that can shape our lives in meaningful ways yet has become less and less understandable.

So - both estimation and understandability of software are currently really poor. That makes this a weird time to be involved in the software sector. It makes it exciting to predict the near future. But it also makes it highly costly for businesses to do any software development. It isn't, however, stopping the digital advancement. Information is power, and whoever gets it right can thrive.

Are you getting this angle from the software developers or software companies you talk to? I'm guessing the answer is no. I'm guessing they are telling you that your software project will certainly succeed, that maintenance will be no big deal, and so on. I get it - framing the planning of software projects like I and the two articles above did might not be the best marketing approach. This is business, after all.

But if you want to know whether your software developer actually understands the situation, ask them about software complexity. Put the finger where it hurts. They should either somewhat admit to this weird state of their sector, and maybe tell you how they try to keep complexity at bay, or they are overselling and you should politely decline their services.

# lastedited 28 Aug 2018
22 Oct 2017

Fairphone is taking more steps toward its goal of modular phones which are used longer (instead of, for instance, buying a new phone because the camera is a bit outdated). I have an (almost) two-year-old Fairphone 2, and taking photos and videos is one of its most important functions. Thus, I bought the new camera module.

The new module upgrades from 8 to 12 megapixels, but there is much more to a camera than the pixel count.

Here I'll show a few before-and-after pictures for any Fairphone 2 owners who are interested. I'll show the original camera first, then the new one.

First picture: Objects in fall afternoon sunlight, no special instructions

Objects in fall afternoon sunlight, no special instructions (original camera)

Objects in fall afternoon sunlight, no special instructions (new camera)

The original camera made some odd choices when taking the picture, especially with lighting. The new module got it much better. This is also a reminder that a camera is hardware, but it needs software that makes good decisions.

Second picture: Objects in fall afternoon sunlight, focus on the sunflower

Objects in fall afternoon sunlight, focus on sunflower (original camera)

Objects in fall afternoon sunlight, focus on sunflower (new camera)

When I focused on the sunflower, the original camera made a better picture. The new camera, however, gets more out of the scene. The table isn't as dark, and the colours are generally more natural.

Third picture: Freezer magnets on slightly reflective surface

Freezer magnets on slightly reflective surface (original camera)

Freezer magnets on slightly reflective surface (new camera)

It's less obvious, but I think the new camera gets details like natural colours and reflections noticeably better.

I also took a comparison picture in a quite dark setting, and both cameras made really different choices. The old one caught too much light, the new one slightly too little. I installed OpenCam to see what difference another camera app makes, and that result was much better.

I recommend getting the new camera if photos are important to you (e.g. you take 90% of the family pictures with your Fairphone 2 and you believe 45 EUR is worth capturing scenes slightly more naturally), and I also recommend not underestimating what a different camera app can do.

# lastedited 22 Oct 2017
18 Sep 2017

I was inspired by a call to write members of the European Parliament about upcoming legislation. Christian Engström writes in this post that self-written emails have the greatest effect, so I picked the two topics still being discussed and wrote about those.

Here is the text of my email:

Dear delegate of the JURI committee,

I am writing to you from The Netherlands as a citizen who is very concerned about internet legislation. As a computer programmer and father of two, I am following the current debate about the future of copyright with great interest.

I want to focus on two articles which I find so troubling that I am writing you today. These are the topics I discuss among my peers and which I raise awareness of: Article 3 (copyright exception for the modern research method Text and Data Mining) and Article 13 (automatic upload filtering).

Article 3:

I urge you not to limit the right to mine data to "researchers" only.

* We need journalists to have this opportunity as well. Big data is where the important revelations we direly need can be found.

* Also, big companies will find their data troves or have already made their own. It stifles innovation if small, young companies are not allowed to find value in large amounts of data.

Article 13:

I urge you not to mandate that we hand the decision about what can be uploaded over to machines. Machines cannot do this task, because they do not understand context (e.g. satire) and uses covered by exceptions. But even more, this is

1) a censorship-like scenario, in which companies will rather block too much than be accused of not blocking forcefully enough.

2) working towards centralisation of publishers, since implementing such blocking behaviour is expensive and thus favours the big players. It will create even more of an oligopoly than we already have now.

Sincerely,

Nicolas Höning

#
10 May 2017

In which I show my solution for listening to audio and watching videos at home in the digital way, with acceptable quality (for most people), at very low cost and with almost no hassle getting it to work.

Here's a short rundown of the requirements I had - maybe you agree with them (which in turn means you might want to read on):

  • All digital (sound and video files & streaming services)
  • hassle-free setup - this is for the bulk of people who, like me, are not in for weeks of research or tooling
  • very low costs
  • no vendor lock-in & use of standards where possible
  • acceptable (not necessarily the world's greatest) audio quality
  • remote control for everything (by mobile phone or actual remote controls)


And here is what I actually use:



PlayOn Harddisk Media player        40 EUR
Google Chromecast                   39 EUR
Raspberry Pi as music box          ~70 EUR
Envaya Bluetooth Audio Player      140 EUR
------------------------------------------
Total                              289 EUR

 

Why? - So here's the problem

Listening to music and watching videos at home - everyone (in a modern household) wants to do this, of course. These days, however, the number of choices for how to do it is vast, to the extent that researching what to buy can take forever. You can also build and configure a lot of things yourself, for instance using a Raspberry Pi and a custom-made luxurious sound card. I understand that there were tough choices to make 30 years ago as well, like how large the TV should be and whether gold cables are really needed to get everything out of your audio signal.

I believe that today the number of sources for media has increased (and I only count the digital cases for myself anymore, really), because next to playing sound and video files from your hard drive there are now a number of relevant streaming services you might want to use.

On the hardware side, a lot of devices are on the market now, which all do something to play audio to you, in varying locations and quality. In fact, it's a young market with a lot of companies trying to develop exactly what we need. It's exhausting to do research there!

A lot of these new solutions are designed to lock you in and to upsell, something Apple is really good at. They want you to buy something wireless for every room in your house and also for on-the-go and what started at 200 EUR ends up at 1300 EUR and you need to keep using this system from now on.


The (my) solution

So I decided to lower my expectations with respect to the audio experience; otherwise I would never have installed anything modern. Other than that, I believe I got pretty close to my requirements. Let's list what I bought:

Harddisk Media player

This little device plays almost any movie file you give it, and it comes with a remote control (not pictured). I like USB sticks, which I can plug in there, but you could also connect a network cable and read files from some place in your LAN. I bought this a few years ago for around 90 EUR I believe (it's out of stock now), but nowadays these devices all seem to cost around 40 EUR. Connect it to your TV via HDMI and it works.

Chromecast

Google makes this nifty little device which connects to your Wifi and then lets you show most of the content your phone screen would show, but streamed from the Wifi via the Chromecast to a connected screen, i.e. your TV. Your phone becomes a remote control. For streaming the things that are well-supported (YouTube almost all of the time, to be honest), this works really well. You have to help it find your Wifi, but that part is very nicely done. Its usability breaks down, though, if you are streaming content via an app that does not support it, or if you do not want the whole Google app stack on your phone (as I'm trying to avoid these days), so I'm eagerly awaiting more innovation in this gadget area.

Envaya Bluetooth Audio Player 

The audio speaker is the area where one can lose the most time and money. I settled for a bluetooth speaker that I can put anywhere I want, that has decent reviews about its sound (not great, but reviews from people with better ears than me - audiophiles - are tiring), good bass, an AUX input, and that can even charge your phone. You could pay less money here, but it is something you'll use a lot. Using the AUX input, it can even give your TV-watching experience a boost. This works out of the box.

Raspberry PI as music box

Streaming music to the speaker from the phone or laptop has its limits. It really occupies your phone, for example, and if you want to play files from a hard drive, you cannot reach them from your phone. So one might want to use a music server which can be controlled from laptop and phone. Enter Pi Musicbox, a ready-to-use open source software package which can run on a Raspberry Pi for this purpose. Basically, it runs Mopidy, an open source music server, but neatly wraps it for a perfect fit on the Raspberry Pi. The nice thing about Mopidy is that there are many mobile apps written for it, so you can control the Pi Musicbox nicely from a mobile device. Pi Musicbox plays files but also streams web radio stations and gives access to streaming services (though I cannot vouch for the Spotify support yet, still trying to get the most out of it - however, I can still use my phone here). By the way, the 70 EUR I listed above roughly cover the Raspberry Pi (about 40 EUR) plus necessary extras like an SD card, a case, a power supply and a 32GB USB stick.
    This gadget is the only one requiring a little work, but not much, as you can see:
    1. put the Pi Musicbox image on a SD card
    2. edit the config file so it finds your Wifi when it boots (see the sketch after this list)
    3. put in the USB with your music
    4. boot
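
For step 2, here is a minimal Python sketch of what that edit amounts to. The file path and the section/key names are assumptions for illustration only - check the settings file your Pi Musicbox image actually ships (and its documentation) before relying on them:

# Hypothetical sketch: write Wifi credentials into the Pi Musicbox settings
# file on the freshly flashed SD card. Path, section and key names are assumptions.
import configparser

SETTINGS_PATH = '/media/sdcard/config/settings.ini'  # wherever the card is mounted

config = configparser.ConfigParser()
config.read(SETTINGS_PATH)

# Assumed section and key names; adjust them to what you find in the file.
if not config.has_section('network'):
    config.add_section('network')
config.set('network', 'wifi_name', 'MyHomeNetwork')        # your SSID
config.set('network', 'wifi_password', 'my-wifi-password')

with open(SETTINGS_PATH, 'w') as settings_file:
    config.write(settings_file)

Of course, opening the file in a text editor does the same job; the only point is that the Wifi credentials must be in place before the first boot.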
 

# lastedited 10 May 2017
10 Mar 2017

Algorithms should be made open ("transparent") to the public. I doubt, however, that we will live in a world where you can demand an explanation of what an algorithm did to you.

So, algorithms are a hot topic now. They change our world! You might remember how Facebook has an algorithm which manages users' news feeds, but it performed very poorly and kept trending fake news. Sad! Or the current discussions about self-driving vehicles being a danger to our safety - or even a danger to social peace, because millions of driver jobs are endangered.

Let me make the side note here that in both of the cases mentioned above, and many similar others, the awareness might be new, but the fact that algorithms do crucial work in our news feeds and our car safety systems is actually not that new. As a good example, take banks, who have been using data mining algorithms to decide who should get a mortgage for decades.

Lots of demands, but few incentives

However - the conversation is starting on what we should demand from entities (companies, governments) who employ algorithms w.r.t. the accountability and transparency of automated decision-making. The conversation is being led by many stakeholders:
 

  • Think Tanks: For example, Pew Research just published a report in which expert opinions around this question were gathered: "Experts worry [algorithms] can (...) put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, cut choices, creativity and serendipity, and could result in greater unemployment.".
  • NGOs: With Algorithm Watch we now have the first NGO that dedicates itself to "evaluate and shed light on algorithmic decision making processes that have a social relevance".
  • Governments: The EU General Data Protection Regulation (GDPR, which enters into force in 2018) has a section about citizens having "the right to question and fight decisions that affect them that have been made on a purely algorithmic basis".


There are many things being said, and I find this discussion more vague the longer I listen, but maybe the one specific demand all these stakeholders would get behind is this: Explain how the algorithm affects my life.

So it seems like a lot might be happening in this direction, but I believe very little will happen. Why?
 

  1. It is very costly to develop this feature (of explainable algorithms). I'll discuss why that is in more detail below. In fact, this feature is so costly, making it mandatory might actually drive smaller software development shops out of the market, as only the big players have enough manpower to pull it off.
  2. Customers will not demand it in everyday products, no matter what think tanks say. Open source software is not demanded by customers - it is being used because engineers love it.
  3. It will only become the norm in a specific type of algorithmic software if the product actually needs it. Consider medical diagnosis software - the doctor needs to explain the diagnosis to the patient. Or the mortgage decision example, where I believe lawmakers could step in and make banks tell you exactly why they will not give you a loan.
     

Explanations: decisions versus design

Algorithms should be "explainable", they say. However, there is no consensus on what that actually means.

In the light of complexity, I believe that what is explainable is not well-defined - while many people talk about explaining individual decisions, all we can really hope for in most cases is explaining the general design - which is much less helpful if our goal is to help people who have been wronged in specific situations. There can and should be transparency, but it might be less satisfying than many people hope for.

A new paper by Wachter et al (2017) makes a good point here that I agree with. They say that many stakeholders to the EU General Data Protection Regulation (see above), have claimed that "a right to explanation of decisions" should be legally mandated, but is not (yet). Rather, what the law includes as of now is a "right to be informed about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems". They also make a distinction between explanations for "specific decisions" versus "system functionality", and they state that the former is probably not "technically feasible", citing a book explaining how Machine Learning algorithms are not easy to understand.

I agree with their doubts here. Let me briefly dive into the feasibility problems with explaining specific decisions, and also broaden the scope a bit. To me, it comes down to complexity, which is inherent to many IT systems being built these days:
 

  1. Complex data: Let's look at the Facebook news feed example: there is a lot of data going into this algorithm. The algorithm makes thousands of decisions for you while you surf, and to fully explain one of them at a later time, you'd need a complete snapshot of Facebook's database at the very second the decision was made. That is unrealistic.
  2. Complex algorithms: Many algorithms are hard to understand for humans. Maybe the prime example here are machine learning algorithms. These algorithms are shown real-world data so they can build a model of the real world. They use the model they built to make decisions in real-world situations. The model is often unintelligible to human onlookers. For instance, a neural network (as used, for instance, in Deep Learning) reaches a decision in a way that the engineers who "created" it cannot explain to you, because the model was built by propagating information through the network many times over, linking back from its end to its beginning (so-called "back propagation").
  3. Complex systems: Finally, many algorithms might actually not live inside one computer. You might interact with a system of networked computers, which interact and all contribute their own part to what is happening to you. My favourite example here is a modern traffic control system that interacts with you while you travel through it, and at each intersection you encounter a different computer. I actually argued in a paper I published in 2010 that decentralised autonomic computing systems tend not to be "comprehensible".

So I believe (and agree with Wachter et al) that the demand to explain, for any given situation, exactly how the algorithm made a decision is hard to uphold. What can be explained is the design of the algorithm, the data that went into its creation and its decision-making, and so on. This is a general explanation, which might be useful to explain to the public why an algorithm treats problems in a certain way. Or it might be useful in a class action lawsuit. But it cannot be used to give any particular person the satisfaction of understanding what happened to them. That is the new reality, which probably already exists, but it has to sink in.

The notion of complex, almost incomprehensible algorithms has been brought up previously, by the way. I want to mention Dr. Phoebe Sengers, who noted that software agents tend to become incomprehensible in their behaviour as they grow more complex ("Schizophrenia and Narrative in Artificial Agents", 2002), and of course the great Isaac Asimov, who invented the profession of "robot psychologist".
 

 

# lastedited 11 Mar 2017
14 Jun 2016

I dragged it with me after my contract ended in 2014, but I actually made a finished product out of my dissertation after all and defended it at the TU Delft this past May.

It was a pretty formal procedure as you can see, but quite a meaningful end to those years of work, and actually a fun day in the end.

The dissertation itself is available here officially, but also hosted by me. I'll post the propositions here:

1. Both the need for low computational complexity of bidding and for effective capabilities of planning-ahead can be addressed in a market mechanism for electricity, that combines the trade of binding commitments as well as reserve capacity into one bid [this thesis, chapter 3].

2. In settings where a uniform price changes dynamically over time and where these dynamics are influenced significantly by consumer behaviour, the ability of a consumer to comprehend price patterns increases if a large part of the other consumers reacts to price dynamics in a manner similar to how he himself reacts to them [this thesis, chapter 4].

3. Dynamic pricing for electricity can effectively reduce consumption peaks, also under the two conditions that the retailer promises an upper limit on prices and designs his pricing strategy for profit maximisation [this thesis, chapter 5].

4. A heuristic control strategy for a battery which is limited in capacity can be designed such that it has the following three advantages: it reacts fast, it can reduce overheating of a connected low-voltage cable significantly and (if prices are dynamic) it can partly earn back the acquisition cost of the battery by performing revenue management [this thesis, chapter 6].

5. There is not one silver bullet to the problem of how to manage a smart grid in the most efficient way. Each setting has its own requirements, given by its own set of stakeholders and design objectives.

6. To have a healthy and happy toddler is not to a small degree a matter of luck.

7. For the foreseeable future, concerns about privacy need to focus on computers and mobile phones, which directly expose political views and social contacts of their owner, rather than smart meters, which expose less meaningful data.

8. If users do not comprehend the reason why a novel technology interacts with them in the way it does, it will not be adopted, even if it is useful and resource-friendly.

9. Electricity grids are the largest man-made synchronous machines, and economies are the most complex man-made systems. To combine them leads to much more complexity than is commonly assumed, and the resulting systems will therefore never be completely understood.

10. In a referral network, where agents base their opinion about the performance of a service on those of other agents, it is beneficial for users if the agents forget old information at a comparable rate. [N. Höning: "Discounting Experience in Referral Networks", Master thesis, Vrije Universiteit (2009)]

I also was asked to write a very short summary, which might be useful here:

New developments require us to reconsider how electricity is distributed and paid for. Some important reasons are renewable energy, electric vehicles, liberated energy markets and the increasing number of smart devices. How we deal with these dynamics will affect important aspects of the upcoming decades, for example transportation, home automation, heating/cooling & climate change.

In order to keep the security of supply high and price fluctuations within acceptable ranges, we need to continuously decide who will supply or consume electricity, at what price and at what time. The resulting complexity should not grow too high for small participants, otherwise novel technology might not be adopted. This dissertation contributes market mechanisms and dynamic pricing strategies which can deal with this challenge and reach acceptable outcomes in four relevant problem settings (mostly situated in the lower levels of the electricity grid).

The most critical problems to address are intervals with very high power flow, or with large differences between demand and supply which need to be evened out. Such “peaks” can result in steep price movements and even infrastructure problems. We study decision problems that will arise in expected scenarios where peak reduction becomes important. In order to arrive at an efficient and usable system, this research specifically looks into

  1. Encouraging short-term adaptations as well as enabling planning ahead (of generation and consumption) within the same mechanism.

  2. Ensuring that small and/or non-sophisticated participants can still take part in mechanisms.

  3. Letting smart storage devices contribute to network protection.

We develop agent-based models to represent expected settings and propose novel solutions. We evaluate the solutions using stochastic computational simulations in parameterised scenarios.

A similarly high-level overview was given by me in a short presentation before the defense.

# lastedited 15 Jun 2016
08 Dec 2014

Last week I attended DockerCon Europe 2014 which luckily happened in Amsterdam this year. I got my finger on the pulse of important developments in the ongoing evolution of the internet and a just-healthy dose of tech-optimism from current Silicon Valley prodigies. I thought I'd share my five favourite slides with a little comment on each.

The internet is still a technological Wild West. So many talented people. So much change and progress in technology each year. So many things you need to know to deliver something that doesn't break for some reason. So much still to achieve. Docker provides a usable (fast and well-documented) way to bundle some things that you know are working into a container, then upload this container to any server and expect this functionality to be up and running and simply work as you expect. It wasn't a DockerCon talk, but I like this short breakdown of what holds us back and why containers can help a lot (9 slides). At Softwear, we are using Docker both in the CI workflow and partly in production (by the way, let me know if you want to do influential UX or QA engineering work for us).

The slide above shows the way of thinking going forward - build stacks from things you know will work and will also work together. Like Lego. Then make these light-weight stacks (your actual web applications) work together in creative ways. The current term for a picture like this is "Microservices".

This slide from Adrian Cockcroft (who spent six years at Netflix) makes the point in more detail about how useful Docker will be. Adrian's presentation (all slides) probably generated the most food for thought. My favourite line (slide 26) is:

DevOps is a Re-Org!

meaning that software developers are taking over system operator/admin tasks in any company which does not actually run data centers (which is becoming a very concentrated business nowadays).

Next, we get to some Silicon Valley-style notion of how suspected technology breakthroughs will change society. Docker Inc.'s CEO is asking here:

What happens when you separate the art of creation from concerns about production & distribution?

Subtly, there is a picture of the printing press. He wants to say that creators of web applications soon might need to worry less about how they will deploy their app such that it will work, as the "container revolution" will make this trivial. Of course, the web 1.0 kind of already did that for content. However, I can see how lowering a crucial technological barrier for inventing useful web applications can really be significant to innovation. We have a lot of content and ways to get it out there, now let us see what cool applications can be built to assist people everywhere in the world. And although we at this conference were a bunch of rich white males, poor people are hopefully getting access to cheap smart phones soon (Africa is a good example). I, too, find these times exciting. But I was glad to return to my normal life, and to cool down a bit.

This slide gives an indication of the scale of change we are seeing in the software world. Henk Kolk from ING told us how this large bank sees itself as a technology company now and removed everyone from their large IT team who cannot program at least something. Being a programmer means being in demand right now, but as his slide also says, speed is key from now on. If you don't get on board with this new way of having tight control over your stacks, together with being totally flexible towards switching technologies, all you will be doing is jump from one sinking ship to the next. I got both excited and chilled, actually.

An interesting takeaway of the current weeks is how Open Source currently works. Big money has gotten in on it, because in the software world you have to invest in widespread and sustainable technology while also having a modern stack. This only works when an open source community carries the technology. Even Microsoft is coming around in major ways. Companies are actually employing the best open source programmers directly to stay on top of things. The industry is a bit different from other industries in this regard (hopefully actually leading the way). On the slide above, Docker Inc. CTO Solomon Hykes is giving us his current set of rules that he thinks make a technology successful these days. As a consequence, Docker got some interesting new functionality (announced on the first conference day), but it was kept out of the core code - "Batteries included, but replaceable".

But it is also not all agreement and happy collaborative coding. No, sir. The latest trend is that a company or a startup guides an open source technology. This makes progress faster and more stable, but it can easily break if you annoy the community. Node was just forked, AngularJS is having a community crisis. The Docker community is also wary of the Docker startup, Docker Inc. In fact, Solomon Hykes spent a lot of his time on stage at DockerCon Europe 2014 discussing how he wants to succeed as a steward of the Docker technology, using a process he calls "Open Design" (see all slides here). There is an Open Design API through which all feature requests have to go, thus separating people acting on behalf of Docker Inc. from people acting on behalf of the Docker Open Source project - no matter which company pays them at the moment. They are creating and updating their own constitution which deals with this construct as we speak (of course in structured text files, so if you suggest a change, you submit a pull request). So the message of the slide above is simple and compelling:

The real value of Docker is not technology. It's getting people to agree on something.

Replacing "Docker" with any standard, this is something you could also have said during any time of rapid development and change. Interesting times.

P.S. There were some really smart people at this conference, building amazing companies and systems. We can expect to see a lot, e.g. from the Apache Mesos project. I could have chosen more technical slides for this list, but it would have taken me longer to explain why I fancy them. A lot of them were also quite intimidating, actually.

# lastedited 07 Jun 2015
25 Jun 2014

Today, I gave a talk at CWI on how to become more efficient with complex computations in our scientific work. I discussed how I have approached the need for distributed computation (to scale up towards larger and more complex problems) and the problem of organising the scientific workflow when doing experimental work.

I promised listeners that they would get

  • hands-on information on getting results from large-scale, "embarrassingly parallel" computations,
  •  ... without actual parallel programming,
  •  ... with little ssh effort,
  •  ... and using the programming language of your choice.
  •  Plus, some tools to keep track of experiments and data.

where I would (not exclusively) mention the tools I have written, StoSim and FJD. I was happy to have a turnout of 16 people and I think we spent an interesting hour.

Here are the slides (direct link to PDF):

 

# lastedited 25 Jun 2014
27 Mar 2014

I'm quite happy with how the visits to this website of mine have developed over the last year. Here are the monthly numbers:

Visits to nicolashoening.de from April 2013 to March 2014

Btw, since March 2013 I use Piwik for my visitor analytics and am very happy with it. You should also be happy about that, because I don't store your metadata on Google's servers, only on mine.

An average week looks like this:

One week in visits

You can tell that people mainly come to my site on workdays, weekends are rather quiet. Why is that?

Well, in 2007 or so, I wanted to have javascript tooltips, or "popups", to display context for links when you hover over them. I wanted to style them the way I wanted. So I wrote a small script. It is very simple, but the page explaining it creates almost all traffic to this website. Look at this example of page view numbers ("page views" are the number of loaded web pages, whereas "visits" consist of one IP address performing one or more page views in one session) from (I think) one week:

The trend is clear. Around 300 people come to that page about the little javascript thingy every day, and not much else I write gets attention. And as you can tell from the long list of comments there, many people use this javascript thingy in the websites they build. I actually get some satisfaction in making them happy, so I answer many of their questions and actually improve the codebase once in a while.

But here is the problem: There are many other similar scripts for this out there, and I never get mentioned when experts list libraries for such a feature (I have been mentioned in two or three forums, I think). Why do people keep finding this? I think the dirty little secret is that I called it a "popup", while the technically more correct term is "tooltip". Look at search queries that people used when they came to this page:

There you go. Me and a significant number of people use the slightly wrong term, and that's what drives traffic here. Accidental linguistic match-making in cyberspace. Positive things come from this. The traffic probably improves my Google ranking a lot. Our interactions help my users get something done and give me some sense of fulfilment. It's a weird world.

Speaking of the world, here is where my visitors are from. Mostly English-speaking countries where a lot of web development happens:

# lastedited 27 Mar 2014
21 Sep 2013

The European Commission just announced a new indicator to measure innovation in its economy. I think it shows how the bureaucrats favour big companies, want to make it easy for themselves and hold a simplistic view of how innovation leads to positive net effects for the economy.

To quote, here are their new ingredients to the indicator:

  • Technological innovation as measured by patents.

  • Employment in knowledge-intensive activities as a percentage of total employment.

  • Competitiveness of knowledge-intensive goods and services. This is based on both the contribution of the trade balance of high-tech and medium-tech products to the total trade balance, and knowledge-intensive services as a share of the total services exports.

  • Employment in fast-growing firms of innovative sectors.

  1. First, the view that patents are important favours big established companies over small innovative ones. Small innovative software companies are often better off just moving on and not wasting their time trying to get a patent. And don't forget that (luckily) in Europe it is difficult to patent software. So comparing to the rest of the world here is even more questionable. Patents are also often used by companies for no other purpose than to block out other companies, which is maybe good for Europe (only if a European company blocks out a non-European one), but it hardly helps substantial innovation in the economy to form. Another point: I think the EU has the view that innovation has a standard way of leading to positive impacts for everyone: someone (a big company, see above) invents something, then they patent it and hire people to develop the invention. As they start producing, they will hire other companies as subcontractors, and so the positive effect trickles down from the inventors to the others. I think that this sometimes happens, but only for big companies. This view neglects large parts of the economy. Innovation often happens in non-formal open spaces and/or in collaboration.
  2. And about the second ingredient - simply hiring people in "knowledge-intensive activities" gives you points, no matter their actual effects on innovation. That makes it easy to count, but what are we measuring? Are we measuring that these employees are doing something useful with their knowledge-intensive work or are we simply celebrating that people are getting paid to think? For example, I'm sure the banking and lawyer industry has lots of jobs that would count as knowledge-intensive. A reason to celebrate them? Also, government spending on research gives a country points here, no matter what is actually researched.
  3. The third ingredient is the only one that makes at least some direct sense to me. One can compare the contribution of high-tech and high-knowledge activities to other services and get some information out of it (albeit my point about the second indicator holds here, as well).
  4. The fourth ingredient is based on the assumption that fast-growing companies must be more innovative than others. That might sometimes be true, but often probably not. Growth != innovation. A dangerous way of thinking, in my opinion. Most often, rapid growth is merely the ability to attract capital - capital that is interested in rapid returns. The substance underneath the company is indirectly related to the hope of rapid returns, but often not necessary for this capital attraction effect (as the last couple of bubbles should have taught us). If your concept of innovation is to enable turnover, it has not been formulated in society's best interest.

I think that behind this formalism there is a dangerous fan-boy attitude towards big companies and big finance, telling the story that they like to tell - an effect of lobbying. What about small companies? They'll have an even harder time being "sexy" to EU bureaucrats now.

There is so much more happening in an economy, which is one of the most complex systems on earth. One example that comes to mind: what about innovation that isn't directly making money but enabling others to make money? For instance, what if people create an intelligent new way of doing things, but don't directly sell this new way? Maybe they open-source their powerful idea and run a successful little consultancy based on their newly-found reputation. With their help, a part of the European economy considerably improves, in more than just one way (they provide direct services and they shared their knowledge to enable many others), but without the bureaucrats taking notice - their innovation is flying completely under the EU radar.

I know it is difficult to "measure innovation", and that is why I have not yet come up with a better way to measure it. However, it seems to me that not measuring would have been better than measuring like this. A counter-question: why do we actually need this number? It's as if the numbers we have (like GDP) aren't already alchemistic and misleading enough.

# lastedited 22 Sep 2013
19 Aug 2013

I think we live in times where both governments and citizens of western democracies are searching for a new role in their relationship. My point today is only this: the court systems are still working to some degree, and they seem to me a valuable battleground in this search. Citizens should actually pay attention and support causes fought in their name.

First, what do I mean by this "search" for new roles? It is not as easy as some pundits had described who, after 1990, simply expected the state to take a step back. After some years of perceived openness, where capitalism had supposedly "won" and everyone could relax and enjoy, I think we now rather see the state taking a stronger stand. One reason might be that governments feel threatened by globalization and the internet, which create too much openness and too many feedback loops for them to feel in control. Another might be rising tension on the resource front, as energy supply gets tight and climate change restricts what we might want to do with the resources we can still get our hands on. Or, more generally, this is just the usual dance between governors and governed, a step forth and a step back, and we will only see what it all meant after this song ends.

Anyhow, we are beginning to discuss what "freedom" should mean, again. And when I say discuss, I mostly mean "battling it out". Emotions are running high and claims are bold and strong. Governments simply create new facts of what is legally possible, while the Chinese lean back and rub their hands.

Some citizens are being taken to court, others decide to let their quarrel with the government escalate to court. In most of these government vs. someone trials, important definitions are being made, and, as far as the court is still functioning, the outcomes of these battles are important for (re)defining freedom - more important than, say, a general opinion piece in The New York Times.

What is also new today is that it is dead simple to support the legal funds of people whose cases you find interesting, via the internet. I believe if you are interested in funding societal change, investing in the legal defense of someone is (to use the lingo of our time) a sound investment. The amounts I gave are by no means to brag about, but I have begun to give to such individuals, and I think I should increase this activity. Here is a short list of who I remember supporting:

 

There are heroes out there fighting for you right now and it is easy to help them.

* Actually, I only gave some monetary support to Wikileaks as an organisation, after the Collateral Murder video. Criminal investigations into Assange started afterwards, and only then did it become clear that most of the financial funds of Wikileaks seem to have gone towards his legal defense.

# lastedited 19 Aug 2013
09 Jul 2013

On the way to the offices I work in, the bike path layout had a flaw: To reach Science Park (coming from Amsterdam center), one had to take a little detour. In the situation shown on the picture below, you had to follow the path on the left for about 200 meters and then turn right, in order to continue on the street in the background (here is the view from above). The more direct way, right down the hill, has been an emerging bike path for the better part of a year. It is now becoming an official bike path, as the picture shows (I took it only a couple of days ago).

While it was emerging, the path was muddy, and (especially on rainy days) quite difficult to manage, both down- and upwards. I spoke to some of my colleagues and everyone wanted to take this route, but until now only the young males felt confident enough on their bikes to do it.

Why did everyone feel strongly about taking this route instead of the slightly longer one? There is an emotional cost to pay for being forced to take a longer route, even if it is only slightly longer. Does this emotional cost justify the effort of building this new bike path? I'm not sure, but I sure can say that the city of Amsterdam has a bike path management that pays attention.

A rather famous example (at least in the scientific research community on emergent human patterns) of such an expression by users that their intent differs strongly from the planners' design is this (from an article by Helbing et al. from 1997):

I'm not sure if the University of Stuttgart reacted to this, though, back in 1997. I tried to find that place on satellite images, but with no success. They probably redesigned it completely. Just as our emergent bike path, this one is, admittedly, not pretty to look at. However, the response from system planners can be quite different, apparently.

 

#
22 Jun 2013

Sometimes you can tell that technological progress is blending into a known science fiction scenario. I think right about now is a time to spot several of those - this year is like those special days when you see a lot of shooting stars. 

So, there is the obvious parallel between governments watching (well, storing and monitoring) more and more of everyone's steps and the famous book "1984".

However, there is more. I recently reviewed "Manna", where a complete automation of society begins not by replacing low-wage humans, but by algorithms taking over middle management to make fast food stores more efficient (by organising the work of the low-wage humans as efficiently as it can be). Recently, reports came out on how Amazon's warehouses actually work like that:

Amazon’s software calculates the most efficient walking route to collect all the items to fill a trolley, and then simply directs the worker from one shelf space to the next via instructions on the screen of the handheld satnav device.

Most of the article is about how badly they treat workers, but their fate is not my point here. And anyway, Amazon puts their warehouses in regions which have very few economic options. Consider that locals in the article are sad because they can't work in a mine anymore. My point is that even mines had middle management. Amazon warehouses are optimally efficient with very few middle management positions (mostly, what is left to be done is to ask the system who was the least efficient, then fire and replace them).

Next to be replaced are the workers themselves. Well... that depends. I'm not sure there will be a business case within the next 50 years for robots doing simple tasks in a warehouse over humans doing them. It's not decided yet if that is going to happen. However - the business case for doing simple middle management tasks has been decided in favour of computers. They are much better at making the most of employee time.

I'll close by admitting that the speed and the consequences of automation in our society are discussed by people with more insight and time for this than me, so I think I'm not qualified to say whether this development is negative or positive*. It is negative in Manna, but Manna is just a story.

 

* However, Amazon is a monopolist, so there might be a different reason not to use them that much.

# lastedited 27 Mar 2014
22 May 2013

To test if a given string is indeed a valid email address is an unsolved problem in programming. The internet is full of endless threads about which regular expression would let all valid addresses pass and forbid all invalid ones. The only agreement is that there is no agreement.

So I welcomed the inspiration of my colleague Jim (at Vokomokum) to implement something straightforward: check that everything before the @ is a valid name (consisting of letters, digits and a couple of other characters). Then use some library to check if everything after the @ is a mail host currently known on the internet.

I was sold. That seems like a really nice trade-off between quick implementation and effectiveness. It is mostly the host name that is hard to check, so let's simply see if it exists. Mail hosts don't disappear and reappear, so it's pretty safe to expect the DNS system to know about them at any time. The only drawback here is that you need an internet connection to run this test.

Anyway, here is an example implementation of this in Python:

import re

import dns.resolver  # from the dnspython package


def validate_email(address):
    ''' check that an email address looks valid and that its host accepts mail '''
    # check general form: a valid local name + @ + some host
    if not re.match(r'[A-Za-z0-9\-\_\.\+\$\%\#\&\*\/\=\?\{\}\|\~]+@[^@]+',
                    address):
        raise ValueError('The email address does not '
                         'seem to be valid.')
    # check host: everything after the @
    host = re.findall('[^@]+', address)[1]
    try:
        # dns.resolver throws an exception when it can't find a mail (MX)
        # host. The dot at the end keeps it from appending local search domains.
        _ = dns.resolver.query('{}.'.format(host), 'MX')
    except Exception:
        raise ValueError('The host {} is not known in the DNS'
                         ' system as a mail server.'
                         ' Is it spelled correctly?'.format(host))

The dns.resolver module has to be installed, as it is not in the Standard Library. It comes with the dnspython package, so you could do

pip install dnspython
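
To round this off, here is a minimal usage sketch (assuming the validate_email function above is defined in the same file; the addresses are made-up examples, and the outcome of the DNS lookup naturally depends on your internet connection):

# Try the check on a few example addresses and report the outcome.
for candidate in ['jim@gmail.com', 'not-an-address', 'someone@no-such-host.invalid']:
    try:
        validate_email(candidate)
        print('{}: looks valid'.format(candidate))
    except ValueError as e:
        print('{}: rejected ({})'.format(candidate, e))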

 

# lastedited 09 Jul 2013
19 Mar 2013

At the time of writing, Posterous is an elegant, easy way to keep a blog. In 2011, I set up a simple blog to post articles I came across that had some angle on the interconnectedness of energy systems. I loved how I could simply send an email to <blog-name>@posterous.com and the article was posted.

However, Twitter bought Posterous as a modern way to hire their staff and then decided to shut Posterous down. All content will die at the end of April 2013. This is another hint that if you make something that lives online and there is a possibility you'll want to keep it, own the domain and be aware that you, or someone you can ask for help, needs to be able to curate the content over time (I started that blog just posting links, but then also wrote a summary or an opinion here or there - sometimes you don't know you'll want to keep something around for longer when you've just started it).

Anyway - so I had to export the content from Posterous and import it into the MySQL database which currently underlies this website. Widely-used blogging software like Wordpress offers an importing tool, but everyone else is probably wondering how to get their Posterous data into their web publishing software.

Posterous offers an export in XML form, but it could have made it easier to deal with. For instance, it is hardly parsable XML, and the creation date is in a format used in emails. It took me a bit to get a simple Python script working, and I thought I'd share it here for anyone who needs to do something similar:

#!/usr/bin/python

from datetime import datetime
import re
import os
from bs4 import BeautifulSoup
from email.utils import parsedate_tz


def san(unicode_str):
    '''
    for sanitizing strings:
     - getting rid of all non-utf8 things
     - escape single quotations
    '''
    s = unicode_str.encode('utf-8', 'ignore')
    s = s.replace("\\'", "'") # first unescape escaped ones
    s = re.sub('''(['"])''', r'\\\1', s) # now we escape all
    return s


psql = open('posterous-import.sql', 'w')

xmls = [xf for xf in os.listdir('posts') if xf.endswith('.xml')]
for xf in xmls[:-2]:
    xfp = open('posts/{}'.format(xf), 'r')
    soup = BeautifulSoup(xfp.read())
    
    title = soup.find('title').text
    tt = parsedate_tz(soup.find('pubdate').text)
    tdt = datetime(tt[0], tt[1], tt[2], tt[3], tt[4], tt[5])
    sql_date = tdt.strftime('%Y-%m-%d %H-%M-%S')
    content = soup.find('content:encoded').text
    
    psql.write("INSERT INTO my_table \
                (title, content, inputdate) VALUES \
                ('{t}', '{c}', '{ed}');\n\n"\
               .format(t=san(title), c=san(content),
                ed=sql_date))

psql.close()

Well, it seemed to work for the contents of my Posterous blog, at least. I only cared about title, content and creation date of each post. 

Place the script above (download) into the folder you get from exporting your Posterous data (mine is called "space-3179959-energy-systems-ticker-0e4cc18feed3e8982ddae1ef4537b529") under the name of import.py (or whatever you like) and then call it via

python import.py

Then, you should find a file called posterous-import.sql which you can use to load your Posterous posts into your own (MySQL) database. I used BeautifulSoup to parse the XML, so you'll have to install that Python library first, e.g. by

pip install beautifulsoup4

Also, you will probably first want to edit the INSERT statement near the end of the script to match the table structure you're importing into.
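
Loading the generated file into MySQL can then be done with the usual command line client (user and database names here are just placeholders):

mysql -u youruser -p yourdatabase < posterous-import.sql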

# lastedited 19 Mar 2013
28 Nov 2012

I recently commented on the idea of rewarding users if they in return offer flexibility to the management of the system they use. For instance, a congested road could pay people not to drive to work during peak hours. Or, in energy management, consumers can offer to use less energy than they originally planned, or a generator can offer to supply more.

The general idea is of course not new, but in the IT-congested world we live in now, it becomes possible to negotiate compensations with users in advance and to identify them while they actually use the system. This new timing of such approaches makes it necessary to think about novel mechanisms (e.g. what negotiation procedure makes sense, which compensation scheme will probably work for both sides).

This approach is not applicable everywhere, for good reasons. In some systems, like the road example, it is hard to track people and probably overall not a good idea. Common approaches like car-pooling and toll booths (or simply taxing fuel) can also work to some extent. In other systems, like the energy example, it is not a new idea per se, but how to implement such mechanisms (such that they are usable and serve the existing infrastructure in the best way) is still under heated discussion.

Anyway, there are now two recent examples of this idea being brought into existence in novel circumstances. No idea if these experiments will be deemed a success, though. Time will tell.

 

 

# lastedited 29 Nov 2012
16 Nov 2012

A conference I'll soon be attending asked participants for the questions they find most important for A.I. I didn't submit any, but from the list of ~150 questions, they now asked us to vote for some (up to ten) that we find most important. Here are my picks (I did it really quickly):

  • Q12. DWIM - Do What I Mean problems. Can computers do what I think or intend or say?
  • Q15. Can machines and the people think together?
  • Q35. The computer processing power is increasing with the invention of new hardware devices, does this mean that the computation processing power is infinite and are humans able to achieve it?
  • Q54. Can computers exhibit the dynamics of the financial market , rather can we a) Model the financial market b) Make the computers to exhibit the same
  • Q78. Does a system become always smarter by integrating smart sub-systems or smart components together? If no, in what cases it becomes smarter, and in what cases it becomes not smarter even stupid? How to make a system smarter when integrating smart elements?
  • Q127. What is the difference between Turing test and Socratic method?

I was a bit disappointed by reading the list of questions. I could not find ten questions to agree with (but I might have missed one or two). Most of them were not only written badly, but clearly simply asked for what the researcher is currently working on (i.e. how can A.I. make use of mobile technology?/how can social networking improve our lives?) or what has been a question in AI already 50 years ago (i.e. what is thinking, really?).  

I am currently not that keen on being called an A.I. researcher, and this (to some extent) showed me why. The concepts are too fuzzy for me; everything goes under this A.I. hat. Too few people care about topics that deeply influence how we will manage in the upcoming decades.

One of the themes I miss is complexity - only questions 54 and 78 really touch upon that (while I'm not sure question 54 has the intention that I hope it has). The complexity of interactions in our interrelated decision systems becomes less and less understandable and controllable. And, while we head towards a future of scarce resources, the complexity of our economic system increases as well, with no signs that the financial sector actually added any value along the way.

A.I. researchers should decide why they do things. If you build robots, is that really what humankind needs? And if so, is it likely that we can afford a robot for everyone in 15 years? Do we need smarter decisions for future actions or rather better explanations of why systems actually fail? If you study complexity, is Facebook really where our fate is decided?

# lastedited 21 Nov 2012
23 Aug 2012

Python's collections module has some nice classes that can be very helpful. I just used the formidable Counter class to check if I got a probability distribution conversion right.

I wanted to generate a random integer in the range of 4 to 8, given an existing value from a uniform distribution between 0 and 1. I came up with a simple way of doing that, but wanted to quickly verify that it actually does what I thought it would do. So I used the Counter class:

>>> import random
>>> from collections import Counter
>>> c = Counter()
>>> l = []
>>> for _ in range(1000000):
...     l.append(int(round(random.uniform(0, 1) * 5 + 3.5)))
...
>>> c.update(l)
>>> c
Counter({8: 200134, 4: 200130, 5: 199970, 7: 199970, 6: 199796})

Looks like it does the job :)
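
For what it's worth, the same conversion can also be written without the rounding trick. Here is a minimal sketch (uniform_to_int is just a hypothetical helper name, not something from above):

import random
from collections import Counter

def uniform_to_int(u, lo=4, hi=8):
    # map u in [0, 1] to an integer in [lo, hi]; min() clamps the edge case u == 1.0
    return lo + min(int(u * (hi - lo + 1)), hi - lo)

print(Counter(uniform_to_int(random.uniform(0, 1)) for _ in range(1000000)))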

# lastedited 23 Aug 2012
09 Nov 2011

I am a big believer in subscribing to websites via RSS. This may be one of those things which sounds like 'get off my lawn' to those who think 'news' is brought by social networks. However, I stand firm: It's a core idea of a modern internet that you make a choice what your news sources are going to be and automatically fetch them.

I was using Google Reader for years. It was a great web application and it allowed sharing the best articles to other people who used Google Reader. I only had four or five friends who also shared but it was grand - it was meaningful to share the best of the noteworthy.
 
Now Google decided to kill off the sharing feature, in order to push all the Google Reader users (read: power users, who process a lot of web news) to recommend what they like on Google Plus. I don't like that - Google+ is a social network, and I can already see on Facebook what kind of content gets mixed together there. No, thanks. Other people were also unhappy, in particular Iranian users, who had a very strong community in Google Reader, because Google Reader and RSS in combination are great for circumventing censorship. A colleague of mine is Iranian, and I can testify that her Google Reader experience was very social. Well, that's over.
 
Anyway, this gave me a push to try out different things*. I can proudly say that I now host my own RSS reader. I installed Tiny Tiny RSS ("tt-rss") on my hosted webspace, which is an open-source RSS processing software with a great interface. It works well (here are some helpful instructions by Jan) and it lets me export the best of the noteworthy in several ways, not just to one social network.
 
What is interesting is that hosting my own led to new old realisations: Hosting something is using resources. In this case, tt-rss is checking all of these websites I subscribed to for any updates in their RSS. I can specify how often this should happen, for instance every 15 or 30 minutes. On a slow & shared machine like I have at my webhost, this can keep the machine busy for some minutes if I follow many feeds.
 
Interesting. Google's cloud just took care of this for me.
 
The cloud makes things too easy for us sometimes. We forget what it takes to provide this level of service. I was glad to find a feature in tt-rss which lets you specify for each feed individually if it should be checked frequently or maybe only sometimes (say, once a week). You see, RSS is great for fast-moving news and for slow news. There are websites, like this one, for which the publishing frequency is measured in weeks or months. They are still important to some people and I believe that to be able to listen to less-frequent publishers is very important for the web. Who cares if I read the article 2 days after publishing, the main thing is to keep up automagically. 
 
So, I spent some time telling tt-rss what I don't need to be checked too often. Actually, it was exciting. Taking care of things myself again. Realising that the services I use come with a cost.
 
Now, I know that Google might handle my feed-requesting more efficiently, through economy of scale and whatnot. But by specifying what I actually find important enough to be checked very frequently, I think I can come pretty close. Also, tt-rss can act as a PubSubHubbub client, which can also save some resources. Most importantly, I get a sense of what the cloud is doing for us, both in a positive and a negative sense. Right now, I am glad I took some matters back into my own hands. I think there will be more to come.
 
 
 
 
* Credits should go to Google for enabling me to export all my feeds in an easy and obvious way. I like their view on this - my data is mine to take with me if I go. I think they'd always like to keep a copy, though.
# lastedited 16 Aug 2012
29 Aug 2011

 I just spent a weekend in Paris, to attend the fourth European Scientific Python conference.

  • A very nice and talented crowd of roughly 160 people. It was very well-organised, many thanks to the team who made it happen.
  • Python has come a long way. I learned that Python detected the latest, scientifically sensational, Supernova explosion and will command an on-board camera in a robot on Mars in 2016.
  • "Mr. ipython", Fernando Perez, gave an insightful keynote talk. He showcased a new kernel-clients model in the upcoming ipython 0.1.1. Not only can several clients work on the same ipython process, a client can now be anything ipython users ever dreamed of (or didn't even know they should be).  First, Fernando showed a console emulation window written in QT. It had nice syntax highlighting, tex support, actually working multi-line editing and inline plots. Pretty neat. Then he showed a broser implementation, merged last tuesday, which they call the ipython notebook. Basically, the ipython session becomes a document, which is editable inline and at all times and can contain many content types (he easily inserted images, videos, math and javascript). The whole book can be saved as json. Fernando closes with the sad fact that less than 30 people do 80% of the work in many python scientific computing libraries (scipy, matplotlib, numpy, etc).
  • I presented my simulation framework Nicessa on a poster and decided to give a short lightning talk about this vision of independent layers which seems to be evolving in this community (Nicessa can be the middle layer):
  • On the train ride back, I happened to sit next to Ralf Gommers, one of the few major SciPy contributors. Pretty interesting, not only because I actually started using SciPy in a research project very recently.
# lastedited 29 Aug 2011
12 Aug 2010

I am sick today, so I had time to watch two talks on why the interpretation of what humans (want to) strive for is outdated and needs to change. Both are about 50 minutes long, but have speakers of different ages and backgrounds. Both can help in understanding this new idea of how mankind is interoperating, but neither is a final answer - they are descriptions of something that is just beginning and may turn out to happen in this or a similar way. They are also not the only people who think about this theme, of course.

Hans-Peter Dürr (interview is in German) is an accomplished quantum physicist and explains how from that point of view, "Wirklichkeit" is a much better term than "reality". He then talks about how he fought in World War II, worked with Edward Teller and Werner Heisenberg and met Hannah Arendt, leading him to later engagements for peace, more sustainable communities and economies.

Sternstunde Philosophie, 25 Apr 2010

Jeremy Rifkin (talk is in English, with convenient subtitles), one generation younger, is someone who runs think tanks, advises the EU on energy planning and sells books. His latest theme is the "empathic civilisation", and here he is invited to Google to explain it and the challenge it faces during the coming energy crisis. He uses the occasion to tell Google what role he thinks they should be playing. He introduces the empathy-entropy paradox we need to break: as mankind has widened the circle of empathy, the energy footprint has become worse and worse.

# lastedited 15 Oct 2010
13 Nov 2009

When we use a common good, we cause the most expense and anger when we all use it at the same time. An idea that is now being tested in several places is to convince a small percentage of people to shift their usage of the good away from these popular time points. It is generally agreed that in many cases, a change in the behaviour of very few people can already relieve a system of much pressure.

The current consensus seems to be that the system operator should pay people for that. If someone actually has the flexibility to deviate, the opportunity to earn or save money might make him actually identify and use it.

I see this in electricity markets, where demand peaks can maybe be avoided by convincing a couple of devices to stop operating for a while, and also in traffic management: the city of Utrecht plans to pay a commuter €4 per workday if he doesn't use the highway A2. They will have to devise some solution where they film the number plates of passing cars, but building new streets might be much more expensive (not to mention how much traffic jams hurt the economy and the mental wellbeing of commuters).

This can be a very effective way towards more efficient usage of common goods - but there are hurdles, like the increasing hunger for usage data that comes along with it. Not many people will like that and we will need to talk about this. The good news is that if the mathematicians are correct, not everyone's flexibility is needed.

 

Update: The Netherlands now plan to tax highway usage for everyone via satellite by 2012 (like the Germans already do for trucks). This system is more flexible, but also harder to accept.

# lastedited 23 Nov 2009
03 Nov 2009

I have been making the optimisation of systems based on local information one of my specialities. Such systems are optimised in a decentralised manner (meaning that control happens on local levels and decisions are also made locally, according to local information). As such they are resilient against the shortcomings and the failure of a central planning node. That's good for users of the system. It also sounds nice. But is it really nice, in daily perception? Let's look at two examples:

On Dutch motorways, the control system sometimes imposes speed limits (say, 70 km/h or 50 km/h) on parts of the road in order to prevent traffic jams further down the road. So the system knows about congestion problems in your vicinity, and might slow you down so you do not make a traffic jam out of it.

Recently some scientists modelled the problematic public transit system in Mexico City. Though buses run according to a plan, they sometimes stay in the stations longer (e.g. when a lot of people have to get in) and so they build up irregularities over a day, which ends up with a lot of buses being in one place and none in most others (this problem even has a name: "platooning"). They concluded that buses should just leave after a short time and not let all waiting passengers board. Those passengers left behind should take the next bus and overall, in the long run, everyone is better off because the whole system will run smoothly.

All this is true, but from the local viewpoint of the user of such a system, it feels wrong. I cursed when I rode on the Dutch motorways and had to slow down for no reason that I could immediately see. And it will definitely suck when a bus driver just leaves with you still standing right in front of the bus stop. We would hate those bus drivers.

I can tell from my own experience in understanding such systems (while researching them) that it would be hard to make everyone understand the situation. On the one hand, it's based on a lot of information and that information is decentralised - it is not all in the same place as a local observer. On the other hand, our brains are also bad at processing such abstract stuff like the platooning problem.

What to do? Would humans be happier with problematic systems, simply because they would feel more in control? Or can we evolve a system thinking in our culture that appreciates such complex, decentralised solutions? Maybe it would help to put out as much information as possible about how these systems work so that everyone can look it up herself. Maybe we'll need to go the extra mile and also visualise them really well. We should have graphically appealing real-time overviews of traffic situations (already in the making), public transports, elevators, electricity grid congestion and so on - for everyone to see and to discuss.

But I also think that in the end, not every optimisation procedure can actually be accepted - it needs at least to be understandable to stand a chance.

# lastedited 16 Jan 2010
25 Sep 2009

Yesterday I was lucky enough to take part in the Seminar of Collective Decision-Making at the Hanse-Wissenschaftskolleg Delmenhorst. Interesting talks about current research, all of them involved socio-economic experiments with human subjects who got real money based on their behaviour. I'll try to briefly summarize all three talks here:


Manfred Milinski

A look at cooperation towards a common good and a stab at representative democracies. Subjects played the "climate game": everyone has to invest in a common good, or all are at risk of losing all they have. Players start with 40 Euros, and can for 10 rounds invest 0, 2 or 4 Euros. If after 10 rounds less than 120 Euros have been invested in the common good, they all are at risk (here they tried 10%, 50% and 90%) of losing all they have left. I was late to this talk, but I heard that the players often did not manage to invest 120 Euros, even in the 90% risk case (all of the rules were obvious to everyone).
In another scenario, 18 players formed 6 "countries" of 3 players each. Each country elected a representative, based on the strategy (s)he proposed (via chat). This setting was played three times, so that countries held several elections. It turned out that this worked even worse. Representatives who worked towards the common good left their countries with fewer Euros, so they were not re-elected. In the next round, representatives all paid too little towards the common good. Sad.
Afterwards, there was a lively discussion about the transferability of this experiment to the real world (someone called the discussion a "snake pit").


Bernhard Kittel
"Coordination and communication in multiparty elections with costly voting"

This research started with modeling a problem in voting theory, but ended with a look under the hood of collective decision making in human groups. In some elections with three candidates, the winner is not preferred by most voters, but the ones who oppose him split their votes between the two other candidates (think Bush vs Gore and Nader in 2000). Those voters should have been more strategic. In addition, lots of voters don't show up.
In the experiment, voters (the subjects) were assigned payoffs for each candidate, modeling that they all had different preferences. All voters wanted either candidate A or B while they all despised candidate C. In addition, there were costs assigned to actually voting, so that some voters would decide to abstain.
Voters knew what the preference distribution was. The researchers added communication among the voters via a chat before they actually voted. This enabled the voters to coordinate their behaviours in a way that candidate C got elected less often and the voters maximised their individual payoffs.
What then is really interesting is that the chat logs contain the "black box of group decisions". The researchers plan to analyse them in order to find out more about the coordination process of humans. Results are to be expected in a couple of months, but they can already say that
  * the voters who organised most of the coordination got less personal payoff in the end
  * ca. 50% of the people simply stick to their candidate in all rounds, no matter what is being discussed
  * communication among all voters is needed for an optimal outcome
Very interesting. I think that if these chat logs are really a "black box of group decisions", then this data set should be opened for other researchers.


Judith Avrahami / Yaakov Kareev
"Do the weak stand a chance?"

These experiments concern situations where one player clearly has fewer resources than his opponent. What happens in terms of strategies?
In the "pebbles game", each player has 8 boxes. Player A and player B have 12 and 24 pebbles, respectively, which they can distribute among their 8 boxes. Then, one box of each player gets randomly chosen and who had more pebbles in his box wins. Player A, clearly being weaker, should leave some boxes empty so that he at least has some boxes with a good chance of winning. Expecting this, player B should also make his distribution of pebbles uneven (rather than putting 3 in each).
In essence, inequality in player strengths introduces variability into the strategy space on both sides.
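
The strategic point can be illustrated with a tiny Monte Carlo sketch (my own toy illustration, not the researchers' setup; it assumes B simply spreads his 24 pebbles evenly):

import random

def win_rate(dist_a, dist_b, rounds=100000):
    """Estimate how often the weaker player A wins, given fixed distributions
    of pebbles over the 8 boxes; one box per player is evaluated at random."""
    wins = 0
    for _ in range(rounds):
        if random.choice(dist_a) > random.choice(dist_b):
            wins += 1
    return wins / float(rounds)

even_b = [3] * 8                    # B spreads 24 pebbles evenly
even_a = [2, 2, 2, 2, 1, 1, 1, 1]   # A spreads 12 pebbles as evenly as possible
risky_a = [4, 4, 4, 0, 0, 0, 0, 0]  # A concentrates on three boxes

print(win_rate(even_a, even_b))     # ~0.0: A never beats a box of 3
print(win_rate(risky_a, even_b))    # ~0.375: A wins whenever a box of 4 is drawn
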
The researchers then got further into the role of the evaluator: what difference does it make that only one box gets evaluated, rather than all? The hypothesis would be that adaptive agents are pushed into trying harder when they know that only some of their work will be evaluated but don't know which part in advance. They ran an experiment in which subjects solved addition problems on 6 pages. When they knew that they would only be evaluated according to one page, performance rose.
This management style was invented to save lazy evaluators time, but it might actually raise performance. I wonder, then, if subjects who knew that they were weak in adding numbers also left some pages out to concentrate on the others.

# lastedited 25 Sep 2009
08 Aug 2009

When I run computer simulations, I have to have a solution for the same set of technical tasks each time:

  • How to combine variable settings in experiments
  • How to store log files nicely
  • How to plot nice graphs from those log files
  • How to run these expensive computations on remote servers (e.g. university servers)

Now I have one solution for all of this and I am very happy with it:


The above bundle of tasks is universal to a lot of scientists who need to run computational experiments. I myself will run into this over and over again.
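
The first of those tasks, combining variable settings, is easy to illustrate. Here is a minimal generic sketch of enumerating all experiment configurations (plain Python, not Combex's actual API; the parameter names are made up):

import itertools

# hypothetical experiment parameters and the values to sweep over
settings = {
    'agents': [10, 50, 100],
    'learning_rate': [0.01, 0.1],
    'seed': range(3),
}

# one run per combination of parameter values
keys = sorted(settings)
for values in itertools.product(*(settings[k] for k in keys)):
    config = dict(zip(keys, values))
    print(config)  # e.g. {'agents': 10, 'learning_rate': 0.01, 'seed': 0}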

It is a natural reflex of a software developer to build a good tool for this over time and this is what I did. While working on several projects during my Masters, the latest of which is my thesis, I developed scripts for all of these tasks and bundled them together so that you could now call it an application: Combex (Combinatorial Experimentor).

It has turned out to be very useful to me lately and I would like to share this tool with anyone who is interested (this is its home). There are a lot of things that could be even nicer (I already maintain a ticket list), so I welcome contributions.

Note that Combex lets you program whatever you want in whatever language you want, all it wants is that you write log files.

I will let the (sparse) documentation speak for itself and just throw in another screenshot which shows how it all nicely comes together: A (dummy) experiment gets chopped into several tasks and those are shipped to remote servers. I can nicely check if they are done. If so, I have Combex get the data and generate nice plots.



P.S. To talk about our process and find general, reproducible solutions seems to be a general trend in science, see e.g. myexperiment.org.


P.P.S. I am unaware of any other software that offers this task bundle, even if it's commercial. Does anyone know software like that? I only found quite specialised approaches, but what I like about Combex is that it doesn't care what the hell your code is doing as long as you configure variables and log data. It is more of a very simple workflow with nice tools along the way. In principle, a lot of other software (say, statistical processing) could be hooked into this as needed.


P.P.P.S. I am still open to being convinced by a better name for this.

P.P.P.P.S. It has now changed its name (it also evolved a lot) and lives here.

# lastedited 02 Feb 2017
22 Jun 2009

I recently accepted a new job. From October 2009 on, I'll be a junior researcher at the CWI (Centrum Wiskunde en Informatica), a national research center here in Amsterdam. I'm very happy to be working with Prof. Han La Poutre.

I will actually be a PhD student but do research in a predefined context - nowadays, a lot of PhD positions in the exact sciences are a mixture of research work and research exploration. An exciting part about this project is that it is about a very real challenge and there are many stakeholders involved whose views on the problem are important. I will do fundamental research on dynamic and adaptive multi-agent systems, but I will also be busy wearing the hats of different stakeholders and trying to bring their views into the model.


[image via http://www.usgbc-centraltexas.org]

The project I'll be working on is called IDeaNed (Intelligent en Decentraal Management van Netwerken en Data - Intelligent and Decentralised Management of Networks and Data; only a Dutch description exists so far). It deals with one of the big challenges we face in the next years - a redesign of our energy distribution networks.

As we begin to exit the age of fossil energy, energy will soon become a problem for us, not a cheap and abundant catalyst for progress. Now, as an AI researcher, I can't help out where it is most important - say, actually invent an effective renewable energy source or useful, long-lasting batteries. But what we need for sure is a new energy infrastructure. There is already a term for this - the "Smart Grid"*.

Our existing energy networks are of an old, centralised design which is not suitable for the way we need to look at interconnected things. As the report "Grid 2.0" (PDF) puts it:

"It is hard to make the link between flicking a switch and the distant power station that made it possible to turn the light on.
(...)
Partly as a result of this lack of a feedback mechanism, and partly because of technological constraints, Grid 1.0 is surprisingly inefficient. Only around 40 per cent of primary energy input (coal or gas) used in power stations is converted into usable electricity, the rest is wasted heat. A further nine per cent is lost as the power moves through the transmission and distribution system. Then a further third is lost in our homes and offices because they are poorly insulated, not designed with energy in mind, and inhabited by people who do not see themselves as players in the energy game."

We need a network that is decentralised (can work with input from several local sources, not only big central plants, and can communicate locally about sharing loads) and avoids efficiency problems in peak times. Such a net needs real-time price mechanisms and needs to accommodate a lot of players: customers want to use effectively, producers want to sell efficiently, governments want to distribute evenly**. Meanwhile, there is a real, non-abstract component: the network. Voltage capacities at different parts of different sub-networks dictate what is possible and what is expensive to do. A lot of these things change all the time. This can get very complex very fast.

What we'll be doing is modeling multi-agent systems in order to learn what good price mechanisms are and what good automatable strategies for the local players could be. We will get input from another PhD student in Eindhoven supplying technical findings about the infrastructure, and we'll work with companies in the Dutch energy market (big players and consultancy agencies that develop energy network simulation software). This highly integrated way of designing scientific projects is a uniquely Dutch approach and I am quite excited about it.

This topic is not only being picked up in the Netherlands. Big companies like GE or IBM are already promoting their competence in this topic which isn't even entirely understood yet.

They smell money in creating a market better targeted at our actual energy usage and in savings due to efficiency. This might be one of the rare cases where what they want and what society needs have a significant overlap. For instance, I recently learned that energy efficiency on the last mile is multiple times more effective than energy efficiency at creation time (due to all the energy losses by conversion or distance that already accumulate until the last mile). And it is easier to do.


* There might even emerge a bubble, with all the money being thrown at this topic right now by governments, but I hope that in 4.5 years, when I finish this project, things will have settled down, the crooks will have taken their money and left (or been thrown out), and we will know where to go.

** Especially when there is not enough. In a post-fossil world there may be "bad weeks" with little energy available. Or, in more grim scenarios, we'll have little energy at all times and almost none in "bad weeks". Making sure distribution can be fair is crucial and not as easy as it first sounds.

 

# lastedited 22 Jun 2009
03 Jun 2009

I am reading reddit.com for the interesting links, but also for interesting conversations in the comments (I still have to learn to skim over the trolls, though). A special kind of conversation came up recently: people would claim that they are interesting or different in some way and invite people's questions.

I remember that it started with a guy saying he is hetero, but works at a gay sex phone line to support his studies. If anyone wanted to know anything about it? People wanted. Some girl said she has epilepsy. Any questions? Sure.

Then, someone had the idea to make a dedicated subreddit for these kinds of conversations (a reddit-style forum). It's called I am a .... Currently, it's pretty lively. Some people can be asked about being French or from Afghanistan, others have strange jobs like lobbyists or are illegal immigrants.

#
27 Apr 2009

Every scientist needs to have a collection of papers (s)he reads and might use for reference. For my last research projects, I used a desktop program called JabRef that was decent. Now, I switched to something really promising.

Mendeley is a startup that wants to do with research papers what last.fm does with music: make it easy and fun to find some that you don't know yet but would probably like. This goal is really, really mouth-watering, since the amount of papers out there seems to grow exponentially. Today, there is no other way of knowing if someone else already did what you want to do than following references from papers and maybe a keyword search on Google Scholar. It's a growing concern for researchers.

They want scientists to have a profile on their site and fill a giant pool of data by stating their research interests, uploading and tagging a lot of papers and maybe building social connections. Then, recommendation becomes possible.

Of course, they have the chicken-and-egg problem every social community project has. Value can only be created when people are there in big numbers, and people only come when value is already there. This is why the Mendeley people decided to start with solving some other problems scientists have, so that users have a reason to come:

  • They built a really slick desktop client to organize papers. It runs on every OS, does full-text search and extracts metadata from PDFs automatically if you want. I think it's already better than JabRef.
  • The client also syncs automatically with your web portfolio. You can have your library with you wherever you want.
  • You can open a research project and share papers only within that research group.
  • You can always import and export your library (for instance to Bibtex).

They said recently that users have uploaded one million papers already. That's a lot of potential for data mining right there. And did I mention that they have a Last.fm investor on their board?

#
07 Apr 2009

I just watched the first part of "The Trap: What Happened to Our Dream of Freedom?" by Adam Curtis. He claims that our basic notion of human behaviour has been crooked ever since the cold war, leading to inhumane psychology and selfish politics.

His main starting point is that in the cold war, the two big military forces faced the double contingency problem of what the other party might think that they themselves were thinking of them (and so on) - all in the realm of nuclear destruction. The USA threw a lot of scientists at the problem and they invented Game Theory. This concept, in which people are regarded as only selfish and numbers can be put on behaviour, got picked up by psychologists, economists and politicians, slowly taking over the way society looks at human nature.

I don't disagree fundamentally with the basic historical notion here: in the 50s and 60s, Game Theory still had a rather simplistic view on human behaviour, portraying humans as only self-interested. Later, western societies went through a lot of change but left the discussion of what constitutes freedom in the realm of rationalism. I hope we're making some progress in developing this idea of what makes us happy and free, but this discussion has been stuck for some years now. In the meantime, societies changed a lot, sometimes built on a much too strong view of humans as rational beings. It makes sense to discuss what went wrong because of this.
I agree with that, but I think, Curtis overshoots with this movie. Here are some points of criticism:

No historical context
For starters, Curtis bends history so that it fits his story. Game Theory wasn't invented during the cold war, but already in the 1930s. The view that humans are rational beings is also much older.
More on context: Curtis recognizes that western societies needed to change in the 50s, 60s and 70s. Psychology until then was really, really inhumane, families were often little dictatorships, and government bureaucracies (in his movie the British) desperately needed reform. Curtis never addresses the view on human behaviour that was in place until then (I think it was close to the view that you can make people do whatever you tell them to).

But any new concept should be explained in the light of the concepts it was made to replace. What was the general idea that the rational model replaced? Were there other good solutions on the table at the time than to try to view people as rational? Maybe it wasn't only scientists who placed this concept in the mind of society. Maybe it was time to look at people that way and maybe this was already much better than the view before.

Crazy Scientists
Curtis portrays proponents of Game Theory ideas (in which people are only selfish) as crazy and old. That is necessary, because today, everyone younger than 70 who works on Game Theory wouldn't look at things in this one-sided way. For instance, when using the Prisoners Dilemma, the only modern models I have seen ask the question: in what settings does cooperation emerge? (Axelrod first made this new notion public 29 years ago.) It's always some crazy scientists misleading everyone else. For example: Buchanan, an economist, is to blame for Thatcher's politics because she invited him to talk once.

I understand that you can't show all context when you shoot a documentary, but Curtis is determined to blame a dozen people for all that went wrong regardless of what was there before, and that smells of conspiracy nut.

Causality
He claims causal links that I can't follow. For instance, here is the gist of 10 minutes of the movie: the psychologist Laing criticized old-school psychology in America (somehow, he was of course inspired by Game Theory in the beginning of his work). As a consequence, American psychology turned to using automated, oversimplistic questionnaires and tested a lot of people for mental problems. Then, it turned out that every second American has some history of mental problems. As a consequence, Americans became oversensitive to the idea of what "normal" is and since then ask their analysts to turn them into this new strange notion of normality. That is one hell of a causal chain, proving that Americans running to analysts is also a consequence of Game Theory.

Here is my theory: society is a complicated organism. Sadly, things take time. Good ideas take decades to develop into mature concepts that everyone has understood. And it seems to be a pattern that they cause a lot of damage when people carry early versions of them into the real world. See Game Theory, or also the influence of early physics ideas on economics. That doesn't mean you can't make the theories better, as happened to physics and, I believe, also to Game Theory. Of course, if certain people are already happy with the dangerously simple ideas and cause damage, we sometimes have to wait until they are somehow out of power. They say that old ideas are only gone when all of their proponents are dead.

But enough of that - back to Curtis. Sometimes, I got the feeling that he really agitates against using numbers for anything that could describe human behaviour. Fine, that is something to discuss, but then he made the wrong movie.

P.S. This critique makes some of the points I made more clearly.

# lastedited 23 Aug 2012
17 Mar 2009

I am currently reading The Origin Of Wealth - a book which tries to explain the deep explanatory trouble classical equilibrium economics has gotten into during the last two decades.

One of the main messages is that economic systems are not closed systems that could theoretically reach an equilibrium - they are highly dynamic and interactions are so complex that they will only -suddenly- reach an equilibrium when all involved players are dead.

This simple drawing I made should capture the most basic physical classification of systems and where economic systems belong. The author says that classical economics would want to place an economy in the right branch and therefore never have a model close to reality.

After economics borrowed the notion of an equilibrium from physics (roughly 100 years ago), physics moved on and discovered entropy and the second law of thermodynamics. Chaos and Complexity have now been discussed fundamentally by almost all basic sciences, but seldom in economics.

Later on, the book discusses that in an open system like an economy, the creation of (temporary) order is what we call "wealth creation" and that this happens by evolution-like processes (the best system/solution/product replaces others):

I am interested to see how the author defines a "complex adaptive system" (which is his own term) later on...

 

# lastedited 12 Aug 2010
17 Mar 2009

My last post concerning cooperation was not about any research that I would personally do, but highlighted the best and most useful results I have come across concerning the old question of why there is cooperation. I am studying Artificial Intelligence (AI). Today, I want to put on the shoes of an AI researcher trying to tell other researchers why research in cooperation is a good idea.

Cooperation as an AI topic
A natural question is: why should someone in AI spend precious research money on cooperation? After all, explaining why there is cooperation is generally a domain of the sociologists and biologists, maybe economists. AI is based on Computer Science and while it surely should try out inspirations from other disciplines, its main purpose is still to build things that work in a new way (initial goals of building something entirely new have been refined). More and more (due to a lot of fruitless approaches and now also due to the economy), I hear the question why any approach would be useful to pay for. What can the new thing do or show that lets people do things better (for instance, more efficiently) than before?

I think that cooperation is a fruitful theme in this context and want to explain why.

Autonomous agents
An important goal for the future is intelligent agents that do tasks on our behalf. We don't want to tell them too exactly what to do (it's work and we might give bad instructions) and we want them to interact with each other in the world. This is why they need to act autonomously. They also need to be reactive in an unpredictable environment, which for them mostly consists of the actions of other agents.
My simple point is that the notion of cooperation is a good tool to model this. First, actions of other agents can mean a lot of things to me. But if I were pressed to put it into really simple terms, I could label those actions as being good (cooperative) or bad (defective) for me. It's a simplification of the world, sure. But we have to start somehow*. With a modeling tool like the Prisoners Dilemma I can already model that agents are autonomous and depend on what other agents are doing. That's already some modeling effort covered, even in a way that researchers long ago agreed upon as a standard method. I can still play around with some settings, though, like the utility values or the number of involved agents per interaction.


Efficient Systems
When I say I model cooperation, I mostly get looks that imply I'm being labeled a "Hippie". But in reality, cooperation means efficiency. Cooperation makes sense if its outcome is superadditive. This means that the utility produced together is more than just the sum of the utilities the agents would have produced alone**. It is also safe to assume that when one agent defects against the other, he takes home a lot more utility, but the overall utility is still less than if both had cooperated. So for the system performance, it would be great if agents decided to cooperate often. This should be sold as a hard fact more often: cooperation makes systems of autonomous agents efficient. It makes sense to do research with this goal in mind.
One thing about complexity and superadditivity: not all interactions are superadditive, of course. But I think that when the behaviours of agents become more advanced and complex, their interactions in multiagent systems will be superadditive more often (since superadditivity often arises when no one is an expert at everything).


Multiagent architecture - Cooperation built-in?
Researchers come up with design guidelines for multiagent systems quite often. That cooperation is an essential design guideline for decentral systems with autonomous agents is sometimes already part of this. Take for instance the AMAS architecture. There is a subheading called "Self-organisation by co-operation". They assume that each agent always tries cooperation and thus the system becomes efficient. I think that is a little simple. Take humans - for us cooperation is natural and feels right. But we don't have to cooperate. In fact, we often don't. If there are agents in the system that defect, then let that happen. Work around them, if you can. Defect against them if you have to deal with them, or maybe see if they change behaviour.
Multiagent systems can - like nature - be heterogeneous and then it doesn't make sense to assume some behaviour for all agents.

Outlook
Cooperation is one way to start modeling complex interactions. There are simple models to use that everybody understands, and we can agree that cooperation makes systems efficient. However, it is important to note that we mostly talk about the question of whether an agent wants to cooperate with another. Much more complicated is the how later on.
How do two agents cooperate once they both try? Can we make any general models for this at all? There is so much context involved, so many circular dependencies, so many sources of uncertainty. I hope we can find a simple model for this that can be as usable as the Prisoners Dilemma.


* And we have to talk about it. To communicate about science is really important and to use simple and accepted models helps big time in this.
** You can nicely model such a situation with the Prisoners Dilemma.

#
10 Mar 2009

I got my new MacBook some days ago and am very happy. Especially having a built-in webcam feels very much like finally being in 2009. Visiting my friend Marcel in Osnabrück, I tried out Photobooth, a nice app to make pics with the webcam. It's great fun, not only for kids:

Here are our best shots :)

#
08 Mar 2009

I have not found time to develop my approach to a next-generation CAPTCHA any further (here is my original outline of the idea from last year). I still think that a collaborative process of creating and solving interesting and/or creative riddles is a really good idea, so I want to at least put this idea out there. Here is the presentation I developed for it together with Kathrin:

Also, I had started a website for this project before I ran out of time. Here are some blog posts.

 

Update 10.03.09: The editors at slideshare.net featured this presentation on their start page. Thanks! Btw, this list of featured presentations is a great place to shop around for interesting  and well-digestible concepts.

# lastedited 10 Mar 2009
27 Feb 2009

Here is a result from the group project.

To summarize shortly, the job description was to take an emergent system (where the exact outcome can't be controlled in advance) and inject some agents that prevent certain behaviour from happening. This could for instance be an effect you, as the system designer, don't want, while you still want to allow the system to freely find other solutions.

In this little animation you can see the first proof of concept in a Particle Swarm Optimization. In the middle is a local optimum (a darker background is more optimal) and the grey agents try to coordinate in luring normal (red) agents to the hard-to-find global optimum in the upper left corner. Everything still happens in a decentralised way.
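
For context, here is a minimal sketch of a plain Particle Swarm Optimization step - just the textbook update rule on scalar (1-D) positions with a fitness function to maximise, without the control agents from our project:

import random

def pso_step(positions, velocities, pbest, gbest, fitness,
             w=0.7, c1=1.5, c2=1.5):
    """One textbook PSO update: each particle keeps some inertia and is pulled
    towards its own best position and the swarm's best known position."""
    for i in range(len(positions)):
        r1, r2 = random.random(), random.random()
        velocities[i] = (w * velocities[i]
                         + c1 * r1 * (pbest[i] - positions[i])
                         + c2 * r2 * (gbest - positions[i]))
        positions[i] += velocities[i]
        if fitness(positions[i]) > fitness(pbest[i]):
            pbest[i] = positions[i]
    return max(pbest, key=fitness)  # the new global best position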

It was fun to put it together in roughly two days, but now the tricky parts would begin:

  • This example with only two control agents worked well, but it all depends on the settings: what are good parameters for adjusting the control agents (e.g. the ratio of control agents vs normal agents, what is k in the k-nearest neighbour approach, why do equilibria sometimes exist that seem hard to explain right now)?
  • In what direction should research in controlling emergence generally go? Martin Middendorf's initial approach [1] is interesting, but also just playing with the idea.

Update: Jeff Vail has some thoughts on "Guided Emergence" in Biology, the War on Terrorism and the Military.

[1] D. Merkle, M. Middendorf, A. Scheidler. Swarm Controlled Emergence - Designing an Anti-Clustering Ant System. Proc. IEEE Swarm Intelligence Symposium, 8 pp

# lastedited 31 Aug 2009
27 Feb 2009

I'll do a three-piece blog series about cooperation - well, at least about some basic terms I've been thinking about, in order to clarify them for myself. This first piece will deal with cooperation as a research objective. Later, I will argue why I think cooperation is a formidable objective for Artificial Intelligence research and why it's a good idea to study network structures along with it.

Basic Definition
To cooperate basically means to do something together. The research I deal with is especially interested in situations where altruism is needed for cooperation, i.e. you have some initial costs if you cooperate. Then, the question the cooperator can ask himself is "Will I be repaid?".

The Dilemma
In many situations in life, you will be repaid. But if you want to be rational about it, there is always some uncertainty. Moreover, cooperation may or may not have some overall benefit for you after you've been repaid, but if you cooperate and get screwed over (not cooperating is called "defecting"), you may lose your whole investment.  So there is unbalanced risk, favoring defection. The uncertainty about this is higher if you know less about the situation you're in.
The dilemma is as follows: if you don't plan ahead (maybe due to uncertainty), defecting is always the best option. If at all, cooperation only pays off in the long run, and only if you don't get defected against too often.

We all know that cooperation happens every day, even with big uncertainty, in humans as well as in almost every other life form imaginable. It has been proven that humans are even hardwired to cooperate, for instance via the oxytocin hormone (see for instance this book by Joachim Bauer) *. The question is how it came to be.

The Prisoners Dilemma
A really nice way of putting this dilemma into numbers is the Prisoners Dilemma. I'm not going to reiterate the basic "prisoner" setup (which was just a story to make the dilemma clear). If you look at the payoff matrix below for a while, you'll notice how the numbers express the dilemma: to defect will not hurt, but might pay off big time - cooperating will not yield that much, but may hurt dramatically.
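
Since the matrix image is not reproduced here, a standard set of textbook payoffs (not necessarily the exact numbers from the original figure) looks like this, written as (my payoff, your payoff) for each pair of moves:

# standard Prisoners Dilemma payoffs, satisfying T > R > P > S
payoff = {
    ('C', 'C'): (3, 3),  # both cooperate: the reward
    ('C', 'D'): (0, 5),  # I cooperate, you defect: I get the sucker's payoff
    ('D', 'C'): (5, 0),  # I defect, you cooperate: the temptation
    ('D', 'D'): (1, 1),  # both defect: the punishment
}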


Why is it nice to have this dilemma formalized so simply? Now you can set up different worlds, simulate them or prove mathematically that a certain setup is good for cooperation or not. It's a great tool. I'll talk more about this in the next post on cooperation in AI research.

To finish this, I want to highlight some approaches to explain why systems have cooperation. It's amazing how long it takes to come to reasonable theories for this:


Survival of the fittest (Spencer, 1864)
The survival of the fittest is a dangerously extreme notion of individual success. This notion only covers the short-term side of the cooperation dilemma (saying that defection is your best option, always), and is therefore stupid. It has been attached to Darwin's ideas of evolution for a long time, and to explain cooperation it is often changed into somewhat more meaningful concepts:
 

  • Kinship selection - A very popular concept among scientists is that you're more likely to cooperate the more related you are to your opponent genetically. This notion is carried by the belief that evolution has favoured genes that make their carrier do beneficial stuff for similar copies of themselves (i.e. for related carriers), where you are related 1/2 to each parent and to a full sibling, 1/4 to a grandparent, and so on. They even try to get these numbers to show up in psychological results (it gets problematic right there).
  • Group selection - For cooperation outside of kinship, people just bend the survival-of-the-fittest metaphor across several levels and say that there is selection pressure on whole groups: if its members cooperate, the group is stronger than other groups. You can find proponents of this from Kropotkin to Hitler.

Both of these subtheories make some sense in specific situations. If I ever happen to die in a fight to the death, then certainly the survival-of-the-fittest approach would be suited best to explain why I didn't survive. But none of these approaches is in any way general. There is certainly cooperation without kinship (called "reciprocal altruism"), and when everyone is connected to everyone, how do you define a group?

Systemic approach
There is a need for approaches that treat the conditions for cooperation more systematically. For instance, one very important property of a cooperative system is that altruistic acts are superadditive (meaning that such an act generates more utility than if the two participants had just acted alone - the utility can still be shared unequally). When you look at cooperation in systems, you are more interested in the behaviours of agents than in their interiors. You care more about the dynamics of all interaction patterns than about whether A would beat B in a duel. This is an approach we need as we realize how complex all our networks are.

I found a promising model by Fletcher and Doebeli (2006) [1], who connect Hamilton's rule with results from D.C. Queller (which are from 1985!!). While Hamilton's rule explains in simple terms that cooperation works well if the cooperators are related in their genotype, the rule can be generalized to the relation among phenotypes. In other words, systems that have properties that support cooperation will have cooperation. It matters that altruists benefit each other. I quote from Fletcher and Doebeli's conclusion:
 
"What this rule requires is that those carrying the altruistic genotype receive direct benefits from the phenotype (behaviors) of others (adjusted by any nonadditive effects) that on average exceed the direct costs of their own behaviors. Kinship interactions or conditional iterated behaviors are merely two of many possible ways of satisfying this fundamental condition for altruism to evolve."

I hope I have motivated how research generally deals with cooperation, or at least what tools I think are appropriate. The next post in this short series places cooperation in the context of AI research, which mostly should answer the question why anyone would want to build cooperative systems (until now, I was talking about explaining them).


[1] J.A. Fletcher & M. Doebeli, Unifying the theories of inclusive fitness and reciprocal altruism, The American Naturalist, 2006

* Of course, too little cooperation also happens. The Tragedy Of The Commons models situations in which agents use up a depletable common good (for instance a water well), thinking only in the short term.

# lastedited 28 Feb 2009
26 Feb 2009

During the talk of Giovanna Di Marzo Serugendo I learned a new term: Stigmergy.

It stems from biology (from 1959, actually) and describes self-organizing systems in which the next actions are determined by the state of the environment. This way, agents don't even need to have a memory or communicate. They leave their environment in some state, and whoever wants to do further work there knows how to go on.

There are two ways to do this (a small sketch of the first one follows after the list):

  1. Leave a marker, for example the pheromone traces of ants carrying information on where to go.
  2. Leave the work in a state. Wasps, for example, build hives in hexagons. Whatever the actual state of the hive is, some simple rules that every wasp knows define where to build the next hexagon.
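
Here is a deliberately tiny sketch of the marker variant (my own toy example, not from the talk): agents have no memory and never talk to each other; they only follow and reinforce markers left in the environment, which slowly evaporate.

import random

CELLS = 20
pheromone = [0.0] * CELLS  # the environment is the only shared state

def run_agent(steps=10):
    """A memory-less agent: at each step it moves to whichever neighbouring
    cell carries the stronger marker and reinforces it."""
    pos = random.randrange(CELLS)
    for _ in range(steps):
        left, right = (pos - 1) % CELLS, (pos + 1) % CELLS
        pos = left if pheromone[left] >= pheromone[right] else right
        pheromone[pos] += 1.0  # leave a marker behind

for _ in range(50):
    run_agent()  # agents act one after another, never communicating directly
    pheromone[:] = [p * 0.9 for p in pheromone]  # markers evaporate

# the 'colony' agrees on busy spots purely through the environment
print(sorted(range(CELLS), key=lambda i: pheromone[i], reverse=True)[:3])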

Systems like this have a property that I think I remember from autopoietic systems: The single actors are not needed for the system to survive (since their memory or their communication holds no existential information).

I noticed in a discussion with Gabriel yesterday that this is a vision of professional programmer teams:

In programming, the number of hidden assumptions rises all the time, and code that isn't reviewed for readability most likely does what it should, but it becomes more and more unclear to other humans what the code intends to do as it grows. The current mantra is to let the code be so clean, documented and self-describing that any developer can quit the team without taking the whole project down with them. This 'clean-code' mentality requires a lot of discipline from developers and adherence to standards.

These standards develop during discussions in the programming community. The tendency to demand stable systems makes the individual programmer less important. And programmers help make this possible themselves. Fascinating.

#
25 Feb 2009

Anders Johansson gave a great talk here at Decoi 2009 about the work Dirk Helbing directs (he is currently in Zürich - here is a 3Sat video of a German interview) concerning the simulation of pedestrian crowds. This research has, over some years, evolved into a rich set of tools, and he was able to entertain us with nice 3D videos of their simulations. I will shortly highlight the main topics he addressed.



The social force model

Their model is based on a swarm algorithm: agents (the pedestrians) follow their own goal, but are influenced by others (others can be other agents or walls). They want to be close to others, but not too close (this is called "repulsion"). Then, they added to this proven local algorithm what they found useful. For instance, agents have individual traits like acceleration (how fast they can go and how fast they reach this top speed).
You can also describe the external forces influencing an agent as physical (strong, but short-lived) and social (not so strong, but affect the agent longer).
By illustrating each step in the model building with a small 3D video of people approaching each other, we really could watch the model becoming quite realistic.
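
To make the basic idea concrete, here is a minimal sketch in Python (my own toy reconstruction, not their code; the parameter values are made up): each pedestrian relaxes towards a desired velocity pointing at a personal goal and gets an exponentially decaying push away from everyone who comes too close. Agents would be dicts like {"pos": (0.0, 0.0), "vel": (0.0, 0.0), "goal": (10.0, 0.0)}.

import math

def social_force_step(agents, dt=0.1, desired_speed=1.3, tau=0.5, A=2.0, B=0.3):
    # One Euler update of a bare-bones social force model.
    forces = []
    for a in agents:
        gx, gy = a["goal"][0] - a["pos"][0], a["goal"][1] - a["pos"][1]
        dist = math.hypot(gx, gy) or 1e-9
        # driving force: relax towards the desired velocity pointing at the goal
        fx = (desired_speed * gx / dist - a["vel"][0]) / tau
        fy = (desired_speed * gy / dist - a["vel"][1]) / tau
        # repulsion: exponentially decaying push away from every other agent
        for b in agents:
            if b is a:
                continue
            rx, ry = a["pos"][0] - b["pos"][0], a["pos"][1] - b["pos"][1]
            r = math.hypot(rx, ry) or 1e-9
            push = A * math.exp(-r / B)
            fx += push * rx / r
            fy += push * ry / r
        forces.append((fx, fy))
    # apply all forces after computing them, so agents react to the same snapshot
    for a, (fx, fy) in zip(agents, forces):
        a["vel"] = (a["vel"][0] + fx * dt, a["vel"][1] + fy * dt)
        a["pos"] = (a["pos"][0] + a["vel"][0] * dt, a["pos"][1] + a["vel"][1] * dt)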

Real-world data
A lot of the work has gone into extracting real-world behavior from videos. They filmed people moving through metro stations and shopping malls. Extracting all individual pathways from the video is quite a demanding task, but they now have a good algorithm to do it. Then they can replace one person in the extracted data with one simulated agent (who acts according to their model) and find out how well the model predicts what a real human does. Do this for many combinations of people, adjust your parameters accordingly, and your model gets better.
It should be noted that of course the video can't tell which intentions the humans had - if someone remembers he forgot something, he'll turn around. Also, the real humans showed a high variation in their behaviour.

Evolution
Some work has been done to evolve an agent strategy via a 'blind' evolutionary algorithm that simply tries to produce a model that predicts human behavior very well (you notice that this is a body of scientific work spanning years).

Macro Level
Now, when you look at groups of pedestrians from a macro level, you can describe crowds in several ways:

  • When groups of people have opposing directions, they tend to form lanes (at least this happens in Europe). If the streams of people hit each other diagonally, they form stripes (this looks fascinating).
  • When the space that pedestrians move through gets more crowded, the mass of people moves through different stages: First it tends to progress in stop-and-go movements. When it gets even more crowded, turbulence occurs in which most people just get pushed around slowly in random directions. Interestingly, some simple models had predicted that crowds would at some point simply stop moving, but that actually never happens.


Real stampedes
The research group also studied real mass accidents that happen in night clubs or football stadiums around the world. In fact, Anders had just arrived from Saudi Arabia, where scientists study the mass accidents that regularly happen during the Hajj in Mecca. We got introduced to the terrain and the special problems of this occasion (for instance, every single one of these 3 million people needs to do the exact same thing at the exact same place). We saw some stunning videos from 2006, when they started to film the crowds (luckily, we didn't see the big accident of 2006 itself). For the next Hajj, the organisers took several measures, like defining routes to follow (slowing down traffic and leaving areas free for emergency support), building important bottlenecks over several stories, and simply registering a considerable number of visitors in order to spread them over the week.

Accident prediction
A nice feature of a model would now be to predict an accident from such a video. They found that a measure called "gas-kinetic" works best for them: it combines the crowd density and the variation in velocity. When both are high, an accident becomes more likely. However, accidents remain hard to predict. In almost all well-known examples, something innocent triggered the mass panic (like a fight between two women in a Chicago night club or the rumor about a suicide bomber in Karbala, Iraq).
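
As I understood it, the indicator multiplies how dense a patch of the crowd is by how erratically the people in it move. A toy reconstruction (my own, with invented parameter names, not their actual formula):

def crowd_pressure(positions, velocities, area):
    # Toy 'pressure' indicator: local density times the variance of velocities.
    # positions/velocities are lists of (x, y) tuples, area is the patch size in m^2.
    n = len(positions)
    if n == 0:
        return 0.0
    density = n / area
    mean_vx = sum(v[0] for v in velocities) / n
    mean_vy = sum(v[1] for v in velocities) / n
    variance = sum((v[0] - mean_vx) ** 2 + (v[1] - mean_vy) ** 2 for v in velocities) / n
    return density * variance   # high density AND erratic movement -> danger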

#
24 Feb 2009

I am currently attending the DECOI 2009 workshop (Third International School of Design of Collective Intelligence) in Leiden. It has very interesting speakers (I hope I manage to summarize some things later) and attendees.

My work group will play around with the idea of controlling emergence (meaning that you might be able to change the mix of your system design on-the-fly in order to prevent certain behaviours of the system that you do not want to happen).

My first positive impression is the location: the Lorentz Center in Leiden is an institution solely in place to let scientists organize efficient and rewarding meetings like this. It's government-funded and professionally run. Attendees get keys to offices (we're 5 in one office) and the coffee machines grind the beans freshly while you wait. There is free Wi-Fi and food.

There is no way to measure directly if this government money is well spent. We don't publish anything here. But any scientist will tell you that the exchange of ideas among scientists is one of the most important tasks in science. Organizing such meetings, though, is hard work and keeps scientists from doing actual science. Kudos to the Dutch government for addressing this problem. Good policy.

#
02 Feb 2009

[from here]

  1. A man ("TN") suffers two strokes and says he's blind. Standard tests confirm it.
  2. He still reacts to someone smiling at him. Researchers are thrilled to find out that his emotional processing center -the amygdala- seems to receive information.
  3. Researchers build him an obstacle course to walk through. He is able to do it. He sees, but only subconscious parts of his brain still process the visual information.
  4. They are excited because now they will be able to study how far our subconscious processing goes, in isolation. At least for vision, that is.

 

#
08 Jan 2009

I am having a rough semester, group-wise. I think that group work at universities is still pretty immature.

pic by Jeff Werner

Here at the Vrije Universiteit (like in many universities these days), group work is an important concept. Putting students in groups saves the lecturer the hassle of evaluating 30 projects. Instead, (s)he evaluates 10 projects. Then, projects can be more ambitious. Of course, group work also prepares students for the complex work life, where nowadays collaboration is an important ... and so on.

My point today is that group work also introduces new problems, problems that are not properly dealt with. Most of the time, a group is assembled and you have to work together with some guys you never met before. Yes, this is where the soft skills get developed. But what if one or more group members are clueless and/or lazy? The latter is often worse, and it happened to me in almost all courses this semester. I worked really hard, while others profited and got the mediocre grades that I almost single-handedly achieved. Meanwhile, I couldn't focus on other stuff like exams and got mediocre grades there as well. I am pissed, mostly because people can behave like dicks right in front of me and get away with it.

Of course, this also happens in the real world. In fact, I know that it happens all the time. But in the real world, there is often something you can do. You can leave and add your work force to a better group. You can fire someone. People can get different payments because their contribution to the project is different. In the real world, there are potential drawbacks for slackers. Currently, in most courses at universities, there is none. You have to stay in your group. Everyone gets the same grade.

What I am saying is: if you create group work, fine. But if you let the work of individuals be invisible, you create incentives to cheat. If you grade everyone the same, you show that underperforming pays off. If you don't allow people to regroup, you are a Stalinist.

A group needs to be able to react to conflicts in a way that matters.

There are so many tools out there that make online collaboration trackable. Tell students to use them and monitor them in case of conflicts. Make conflicts visible. Introduce a sensible rating among group members and act if there are disparities. Make it possible for members to leave a group. There is so much that could be done to raise the quality of work and the motivation of the students who are not trying to get through on the backs of others.

 

Update 12.01.09: In one of the projects, one that is still going on, I decided to talk to the supervisors. One normally doesn't want to do this, because it feels like telling on your colleagues. But by this time, I had become so angry* that I didn't consider my group members colleagues anymore. I have to say that when the supervisors really monitor group work (in this case through group meetings and presentations), then they can and sometimes will react properly. In this case, they had also suspected what I was describing. After they questioned the rest of my group about their subject knowledge, they decided to split the group. Maybe that is better for everyone. The supervisors really took time to resolve this and had indeed prior knowledge, albeit coarse, about individual work. If supervisors do good work like this, I think they should take a minute at the beginning of a course to mention that they will also keep an eye on individual performance (and there might very well be differing grades) and will react to problems in groups (and not just in the "hippie way"). This helps the incentives.

By the way, this was in a group of three (where they normally preferred a size of two). Group size is also very important. A group shouldn't be too big. I had one course this semester in a group of five and that just spells disaster in this setting.

* I was angry when I wrote this rant, in fact - so I now reformulated two or three adjectives in the text.

# lastedited 12 Jan 2009
20 Dec 2008

picture by twak: http://www.flickr.com/photos/twak/
In recent years, mind-doping has moved from science fiction to commonplace, at least that's what they tell us about US universities. Here is a recent article from the German "Die Zeit", and it is also a scenario discussed in the book "Radical Evolution". Students take cognition-enhancing drugs like Ritalin (there are many more, it is a hot market) to get ahead of the competition during exams. US researchers are now demanding an open discussion about it, and are provocatively proposing to give all students access to such drugs, for equality's sake. Neuro-enhancement will come; the only questions are when, how and of course at what price.

This is scary, not just because it doesn't feel right to a lot of people. It also means that there will be a more direct way to turn money into success. If you are rich, you can afford the best drugs to make you perform better, concentrate on only one thing, and so on.

But maybe, hopefully, this is not the only road this can go down. Paul Graham recently talked about why he thinks a successful society relies less on short-term evaluations like exams and more on actual, long-term successes like finishing a project at work or founding a startup. A society should measure its success by how well it keeps the direct influence of money on success down. He sees a difference between the US a couple of decades ago and the US now - money hogs like elite universities and big corporations have become a little less relevant (meaning: they no longer hold the monopoly on the road to success) *.

There are some drugs that might translate into medium-term success - for instance, Provigil lets you depend less on sleep and is taken by military pilots. But for these things I see a reasonable middle ground. People who overdose will not have long-term success. It's just like workaholics tend to break down after some time.

I hope that the trend Paul describes is real and will hold. Just as all the effects we have on the world should be valued more by their sustainability, we also need to regard human work success from the viewpoint of sustainability. I don't see a drug on the horizon that makes you not just a better performer, but more "intelligent". For me, intelligence is a long-term concept.

* Of course Paul promotes startups as the new thing in the rest of that article. He is advertising his startup seed company. But let's not bash him for that right now. If I think of "startup" as a broad term and stay less enthusiastic about startups as they are today, then Paul actually sounds reasonable most of the time.

#
08 Dec 2008

Spoiler: nasty programming problem ahead.

I am currently developing on a Plone3 website. Plone and its base, Zope, have a lot of concepts from the programming world, layers of abstraction if you will. A layer of abstraction is designed to make your life as a programmer easier - but only if you understand that it's there, of course.

Example:

I was developing an aspect of certain types in our project, namely that they can be bookmarked. With Zope(3), you can write an adapter. This class does nothing but add specific behaviour to another object. Now I can have a lot of different bookmarkable types, without changing all their classes in the same way (and no, inheritance is not an answer for everything). All I need to do is adapt them via an interface when I want to bookmark them, like this:

bookmarkable = IBookmarkable(context)

My adapter object will make sure that the adapted object has a 'bookmarked' attribute, a boolean:

def __init__(self, context):
     self.context = context
     self.context.bookmarked = False

But oh! Everytime I'd adapt some object via IBookmarkable to work on its bookmarked attribute, a new adapter is created (via __init__()). So each time I adapt, I set the bookmarked attribute on False. So let's change that to:

def __init__(self, context):
     self.context = context
     if not hasattr(self.context, 'bookmarked'):
         self.context.bookmarked = False

Now I was happy. But later, my unit tests showed me that whenever I bookmarked a folder object, the bookmarked attribute would be set in the same way for all bookmarkable objects contained in the folder!

After a while, I found that the code that checked for the bookmarked attribute triggered Zope's implicit acquisition. The acquisition mechanism uses attributes from objects higher up in the containment hierarchy when the object itself doesn't have them. So in the code above, I check for the attribute and thereby (through implicit acquisition) end up using the one of the containing folder. When I then later bookmark the folder, I also bookmark the contained object, since they share the same boolean. To prevent this, here is the solution:

def __init__(self, context):
     self.context = context
     # aq_explicit looks only at the object itself, ignoring attributes acquired from parents
     if not hasattr(self.context.aq_explicit, 'bookmarked'):
         self.context.bookmarked = False

And I was so close to bothering the Zope mailing list - now I am a bit proud to have figured out the abstraction layers myself :)

 

# lastedited 08 Dec 2008
28 Nov 2008

Helping your fellow citizens is often a little different over here in Amsterdam.

I live on top of a coffee shop. It's nothing more than a take-away. Get in, buy weed from a little window, get out.

1.

A young man approached me when I wanted to unlock my door. He gave me 5 Euros and asked if I would buy him weed (because he isn't 18 yet). I said yes.

I stepped inside, went to the counter and said "Weed for 5.". The guy placed three little bags on the counter and I took them. Turns out this is wrong. I should've chosen one of them.

I: "Oops, this is my first time."

He: "Did someone send you?"

I: "No, no, I just moved in..."

 

2.

The next day, an old woman in a wheelchair was waiting in front of the coffee shop and kept knocking on the door. "Mag ik u helpen?" ("May I help you?") I asked. I stepped inside (there is a small stairway in the entrance) and told the guy: "You have a customer here."

Whereas in other countries you help old ladies across the street, here you help them buy weed. Good times ...

#
28 Nov 2008

Dr. Moira Gunn interviews Keith Devlin, author of "The Unfinished Game" (22 minutes).

He talks about letters that two mathematicians -- Pascal and Fermat -- exchanged in the 17th century. They pondered over a really simple gambling problem: The unfinished game.

Describing uncertain future outcomes in numbers was Pascal's simple answer - and it is the foundation of probability theory. From that followed the modern finance and insurance systems we see everywhere.

The interesting thing is that Fermat quickly came up with a simple solution that everyone will understand today. It's taught in 6th grade. But Pascal, one of the best mathematicians who ever lived, didn't get it. Putting numbers on future events was a new kind of thinking. Framed like this, it's a powerful lesson about how culture changes thinking over short time spans.
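
To make the "unfinished game" concrete with my own example (not one from the interview): two players bet on who first wins three rounds of a fair coin toss, and the game is interrupted while player A leads 2-1. Fermat's trick is to enumerate every way the remaining rounds could still play out and split the stake accordingly:

from itertools import product

def fair_split(wins_a, wins_b, target=3):
    # Enumerate all possible continuations of a fair game (Fermat's method)
    # and split the stake by how often each player ends up winning.
    remaining = (target - wins_a) + (target - wins_b) - 1   # rounds that could still matter
    outcomes = list(product("AB", repeat=remaining))
    a_wins = sum(1 for rounds in outcomes if rounds.count("A") + wins_a >= target)
    return a_wins / len(outcomes), 1 - a_wins / len(outcomes)

print(fair_split(2, 1))   # -> (0.75, 0.25): A should get three quarters of the pot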

# lastedited 28 Nov 2008
28 Nov 2008

In this lecture (video below) from January 31st, 2008, Lessig sums up the work over the last years that led him to create the Creative Commons licence and start a grassroots movement against lobbyism.

You'll watch one of the best presentations I have seen, speech-wise and slide-wise. You'll learn how and why copyright has grown to be a serious threat to free culture and media dialogues in the last century.

I came to see this because I wanted to see Rip - A remix Manifesto (trailer below) at the Amsterdam international documentary film festival. But it was sold out. So I'll have to wait for it to be finished...

# lastedited 28 Nov 2008
14 Nov 2008

Sentence 5 on page 56 of "Collapse" by Jared Diamond:

All that because of a few small plants whose dangers were mostly unappreciated at the time and some of whose seeds arrived unnoticed!

Book meme:

- Grab the nearest book.
- Open it to page 56.
- Find the fifth sentence.
- Post the text of the sentence in your journal along with these instructions.
- Don’t dig for your favorite book, the cool book, or the intellectual one: pick the CLOSEST.

via Leah Culver's blog.

#
30 Oct 2008

I am currently at the BNAIC 2008 conference (The 20th Belgian-Netherlands Conference on Artificial Intelligence) in Bad Boekelo.

As I was (along with several others) greeted with a sign at the train station for the first time in my life, I might as well also engage in this new thing they call live-blogging. Here are some first impressions from the first day (I present my paper tomorrow):

In general:

  • The community of AI researchers here seems to be very nice and interesting. I'm looking forward to talking to even more people.
  • Some people have these new, fancy mini laptops. Nice... I want one (of course)
  • Loads of formulas and text on the slides. Ten sessions in one day are hard to follow this way. The importance of presentation should be known by now, but here the scientific community is lagging behind.
  • Everything happens in a nice resort in the Dutch countryside. Maybe I play some pool later :)
  • They have free Wifi. And it's completely open (granted, here in the countryside there are few abusers)

Here are a few notes of things I found interesting today:

 

Nao, the Robot
------------------

A French startup company managed to build a robot so versatile and so programmable that it got chosen to be the new platform for the RoboCup robo-soccer world contest. And - they brought some to Bad Boekelo to play. He walks like a robot god, has hands and Wifi, and is very well designed, looks-wise.

Isn't he cute? Notice especially how he gets up after being knocked over (but first he says: "I'm okay. I think I'd better get up"). The motion looks fabulous. He has 25 degrees of freedom and is bipedal, helped by an inertia sensor.

He is not smart by himself, but very easily programmable, as I saw live. See the company website for more. At the dinner tonight, we had a band playing folklore music; they put a Nao on stage and he danced a Japanese dance someone had programmed for him. A-dor-a-ble!

The guys said they have already used Nao with autistic people. Maybe it's also nice for elderly people and children, but first it will be sold to researchers, since it is still being developed.

 


Keynote by Ruth Aylett: "Interactive Storytelling - emergent narrative or universal plans?"
 

Her group works on Interactive Storytelling. It tries to integrate common planning theory with all these ideas.
What are the necessary ingredients of a story? You need a world and people, and they have to change each other. And a dramatic trajectory.
The levels of a framework for this are the media used, the plot (the tale) and fabula (everything that actually happened). In interactive storytelling, all levels are updated simultaneously. This needs heavy planning.
They also use "Emergent Narrative" as a bottom-up approach. Having only interesting characters is not enough to reach global goals... (but they can take on part of the burden of generating new plans for actions).

Nice sources:
Riedl & Young: Plot repair (when user threatens plan). Is repair believable? If you want to shoot Robin Hood and try 20 times, and the system makes you miss each time because he can't die, that's not believable.
Mateas & Stern: Library of Story fragments: Facade Beats
Cavazza: Plan Trees

Goals are not enough for drama. You need emotions. Being rational alone is not interesting (drama). They elicit world-directed and/or emotion-directed responses. Ortony et al (1988) have a nice grouping of emotions.




Mihail  Mihaiylov: Collective Intelligent Wireless Sensor Networks

In sensor networks, nodes decide for themselves what to do, and they have to be careful with their energy. A network should deliver good data responses, but also live long - a tradeoff.

The following situations waste energy: idle listening time, overhearing, collision of signals, and control traffic. Just going to sleep sometimes to reduce the listening cost is bad for response times. Basically, the intentions of single agents (live longer) are bad for the system.


Solution: A node only cares about a set of "affected nodes" - i.e. it doesn't care about everyone, but only about those whose cooperation it will need soon (because they route its signals to the base station). It learns their energy levels (yes, that's control traffic, but not much).
Nodes learn their own tradeoff between sleeping and taking action. As a result, the system is a little less responsive, but its lifetime is much longer.
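
Purely as an illustration of the kind of rule described (my own toy heuristic; the actual work uses learning techniques whose details I don't know), a node's sleep/listen decision could weigh its own battery against the weakest relay it depends on:

import random

def decide_to_listen(own_energy, affected_energies, responsiveness=0.5):
    # Heuristic sleep/listen decision for one time slot.
    # own_energy and affected_energies are values in [0, 1]; returns True to stay awake.
    # The less energy the route towards the base station has left, the more this node
    # spares it by staying awake itself.
    route_energy = min(affected_energies) if affected_energies else 1.0  # weakest relay is the bottleneck
    willingness = responsiveness * own_energy + (1 - responsiveness) * (1 - route_energy)
    return random.random() < willingness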

 

Update 01.11.

I gave the presentation and everything went well. I got a lot of questions and some were really interesting for further research. I uploaded the slides.

Also, I must say I noticed something else: While the people here in the Netherlands are pretty diverse skin-color-wise, this wasn't the case at the conference. All in all there were 155 people: a lot of white men, some white women and three Asians.

# lastedited 01 Nov 2008
13 Oct 2008

I have been following the development of Amarok2 for a week now. It's near completion, though still a little buggy. The second beta was just released.

I noticed that if you use a piece of software every day, you build a relationship. I donated some Euros to the Amarok team yesterday, and as of today I am running the beta despite some crashes. I do appreciate a fresh spirit in my music listening. Thanks, Amarok team.

Now. The main concept of the latest KDE 4 software is to have widgets (I especially love Folder Views). So in Amarok, too, you get an area (between the media area on the left and the playlist on the right) which you design yourself. Well, at least you decide which widgets you want to have.

In this screenshot, you can see that I opted to see the lyrics for the current track and the Wikipedia entry for the artist:

The widget area especially could use more programming love (you can zoom in, but not in the nicest way yet, and some widgets still crash). But I must say that with KDE4 I have become a real KDE fan. Thanks to Pete for encouraging me to try again.

P.S. The playlist there was assembled randomly. I skipped Robbie Williams.

# lastedited 13 Oct 2008
30 Sep 2008

 

stolen from realvrou

#
23 Sep 2008

Some weeks ago, three friends visited me in Amsterdam. We rented them three bikes at orangebike. After a long pub night, we locked the bikes and got on my boat to have a last drink on the calm water. We witnessed a guy walking up to one of the rental bikes, unlocking it somehow and riding away with it. It took 30 seconds. Sadly, we needed a minute to reach the shore and couldn't follow him.

In the end, we had to pay 240 Euros to orangebike. Had we opted to pay three Euros more per bike, we would have been insured and only paid 50 Euros. We didn't do that, sadly.

Why? Because we lacked information. Most people who have to make decisions about a risk within a new, complex situation, lack information. That's what the ungodly high profits in the insurance world are all about.

I have lived in Amsterdam for a year now and have not seen a bike get stolen or heard of a friend's bike being stolen. I thought locking your bike was enough. Well, obviously it depends on what bike you have. I suspect a difference in risk between private bikes and rental bikes, because after we came back, the guy at orangebike said that ours was already the fourth stolen orangebike that same day. And only in that office (they have several in Amsterdam). Thanks for that extra information! You lose several bikes a day??

When you really know the stakes of all players in a market situation, you get a better picture (like the one I now have and sketch below*). But if you lack information, betting on a risk is unfair.

As we open up so much information these days, I would really like to see a public database, fed by everyone, for information attached to risks. Information like what I present here about rental bikes in Amsterdam, but somehow organized. Imagine you come into a new situation, like having to decide whether to insure yourself against some risk, and you get the experience and information of a lot of customers and other involved stakeholders from the internet.

Risks, and the bets we have to place on them, are a really essential part of our lives these days, and horribly abstract. It goes wrong all the time (look at the banking crisis). I am close to saying that it might be one of the great issues that the information age should tackle.

Not because of bikes. Although bikes are really important in this city. As you see, I take this issue pretty seriously myself :)

 

* Everyone but the tourist wins. Obviously, orangebike doesn't really care to have the best locks or to give you a second lock for the wheel (like some other services do, as I learned). They'll get their money for a new bike anyway. And as tourists normally leave town the next day, they don't have to care about customer satisfaction. For thieves it makes sense to go for the rental bikes: they can specialize in only one kind of lock, and all orangebikes use the same one and are easily recognizable. You can walk up to one of them and break it if you are trained on the lock, and as we saw, the whole process looks like you're unlocking your own bike. Lastly, the insurance would stop insuring rental bikes if it didn't make a profit. Luckily for them, bike rental services don't really inform customers, so they rarely have to pay out claims.

# lastedited 05 Oct 2008
22 Sep 2008

I am currently doing literature research on Wireless Sensor Networks. Basically, these sensors can be very small and cheap and measure everything from your heart rate to wildfire occurrences or enemy combatant movements.

This is an exciting research area centred on how the communication between those little guys should be done. I like it because they provide simple models with which decentralized communication can be simulated, but these models are also meaningful, because applications are around the corner (and you might even get to try them out in reality if someone gives you a bunch of those sensors).

Well, the assumption is that Wireless Sensor Networks will be abundant in 10 years. Put them in clothes and you have smart underwear. Throw a million of them off a plane and you can have a lot of cheap and error-resilient knowledge of the environment.

Well, at least until all their little batteries are down.

My point today is: What if we throw a lot of them off planes and use them until they are out of power? What if we do this a lot? Seems likely. It's so damn useful, in every branch of life, from military to environmental observation.

Maybe wireless sensors will be the junk of the next decades. They might be everywhere, lying around, polluting, rotting. We are going to tell our kids not to pick up old wireless sensors, and vets will get them out of our dogs' intestines.

 

#
27 Aug 2008

We just got mail from the Belgian-Netherlands Conference on Artificial Intelligence:

Dear Nicolas, Tomas and Martijn,
we would like to inform you that your Regular Paper entitled
"Beating Cheating: Dealing with Collusion in the Non-Iterated Prisoner's Dilemma"
is accepted as an oral presentation for the BNAIC 2008 conference.

This paper is a follow-up to the paper I presented at the student conference in Utrecht, but having a paper accepted at this conference is a much bigger scientific honour :) I thank the anonymous reviewers for their helpful comments and look forward to meeting interesting people at BNAIC at the end of October.

Update 01.11.08: The proceedings are out. I give you  the paper  and the bibtex.

# lastedited 10 Nov 2008
23 Jun 2008

My favorite futurologist, Jamais Cascio (see the "Participatory Panopticon" vision), recently gave a talk in San Francisco. He talks for one hour, starting with education but moving to almost all topics that he is interested in right now. He says the talk is about "his work in progress". Some of the thoughts are not from this year (e.g. see my review of "Radical Evolution" by Joel Garreau), but some I've seldom or never heard. I'm looking forward to hearing the new scenario Cascio might turn this into.

I started taking notes halfway through the talk, when I noticed that interesting individual ideas of Cascio's were popping up rather than a coherent picture. Here are the ideas I remembered:

  • A future of education: Every student creates his/her own curriculum totally from scratch?
  • There is a recent price drop for molecular 3D Printers. Looks like the price slope for other technologies, e.g. CD burners. They will be really affordable in a few years.
  • What if we could have wearable pattern recognizers? A really vague concept right now, but what a great idea! Best thought of the talk.
  • Some researchers notice an increasing ability of modern humans to think in complex ways (by themselves). All the new media of the last 60 years played a role in that (despite each being called "junk food" for the brain in its time).
  • We will want to have personal data security in a world that will undergo physical changes (think climate change). Dislocation will happen to a lot of us.
  • Revisiting the Participatory Panopticon: How will it influence our daily storytelling when every detail is seen and we maybe even can have two or three views on it?
  • Physical augmentation will soon be for everyone (who has money, of course). Well, I knew that, but the place to pay attention to right now is the development for the disabled. Their stuff is already so cool (e.g. legs, eyes, ears) that "normal" people are starting to want them these days because they are better than our natural set!
  • On the non-material augmentation market (augmenting the mind - think drugs), this has already happened. Students took mental drugs (e.g. attention enhancers) to learn better. Now these have arrived in the workplace. Be ready to compete against mental doping - on your job.
  • Collaboration, bottom-up development, open source - these concepts are moving into more workplaces, but also into politics. Warfare think tanks discuss why terrorism is so successful; it is because it already embraces these concepts. The big players are starting to think about how they can play this game, too (while still remaining big global powers). I first read about this on Jeff Vail's site (Terrorism in a Post-Cartesian World: PDF, Network Defense strategies: HTML).

 

# lastedited 23 Jun 2008
16 Jun 2008

It's weird that the death of someone you never met can ruin your day. Esbjörn Svensson died while diving two days ago. I have spent countless hours listening to his music, while working or chilling. I am very grateful.

 

I saw them in Oldenburg last year and they were awesome. So into the moment from the first second on. I'm sad now.

#
08 Jun 2008

Funny thing that Germans as well as Englishmen should understand this Dutch title :) - though I am not totally sure it is correct, I need more practice.

Anyway, so I was a speaker at this student conference, organised by the Dutch society of AI students. All other presentations were in Dutch, but the organisers were kind enough to let me apply even though I could only give my talk in English. And they accepted my paper, which results in an official publication in the conference proceedings.

The project this is about was done last semester in the course "Advanced Self-Organisation" by Martijn Schut. It deals with trust-building in complex markets. My partner on this was Tomas, but as he is currently in China learning to play Go, I wrote the paper by myself. Maybe we'll work on the topic some more...

Last Friday, I went to Utrecht and attended the conference, which was a nice meetup and my talk was well received.

#
31 May 2008

Websites are getting heavier by the year (and this one is no big exception). As people crave nicer designs and more functionality, and internet connections get faster, our web servers include more HTML, CSS, Javascript and pictures when they reply to a page request.

But: Not everyone's internet connection gets faster at the same rate. Some people are stuck with modems, for instance in a lot of places in Africa. The internet becomes more and more unusable for them. It's called the Digital Divide, and it's a shame, basically. It gets bigger for them as web pages grow heavier.

No one puts in extra work to maintain light versions of websites so that these people can conveniently surf them. They don't have money, so they're not important.

But if rich people need light websites, stuff will happen fast. Luckily, with the advent of internet-browsing phones like the iPhone, a lot of big websites offer mobile versions. Now that is something for the rich and, by luck, also for the poor.

So, Africans or other people with low bandwidth: Check out mobile Reddit, mobile GMail, mobile Facebook, mobile MySpace and others (I just googled for 2 minutes)! Save yourself a lot of mind-numbing page load time. Note: Some web servers, like mobile Spiegel, check whether you actually use a mobile phone browser, but you can fake which browser you are using.
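
For example, a script can claim to be a phone simply by sending a different User-Agent header. A quick Python sketch (the URL and the exact header string are placeholders):

import urllib.request

# Pretend to be a phone so the server hands out its lightweight mobile version.
req = urllib.request.Request(
    "http://m.example.com/",   # placeholder URL
    headers={"User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS like Mac OS X)"},
)
with urllib.request.urlopen(req) as response:
    print(response.read()[:200])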

# lastedited 08 Jun 2008
08 Apr 2008

On my website, I have a widget showing the articles I recently starred with Google Reader. That widget stopped appearing recently. After some research, I found out that the javascript error "opera.version is not a function" is caused by my scripts initialising a variable with the name "opera" (thanks to David Buchmann for his writeup).

You see, to render nicely in Opera, the Google javascript wants to work with a variable by that name. If it runs in an Opera browser, Opera will provide that variable. If not, it should be "undefined" because it doesn't exist. Unless, of course, my script (or any other that ran before) instantiated a javascript variable named "opera".

Well, my scripts did (it's a natural variable name to come up with if you want to store somewhere that the browser is/is not an Opera browser). I renamed that variable and the Google Reader items show up again.

Here, we have a nice first-hand experience why namespaces are a great idea. But as the web grows in an organic manner, we have to take what is there.

Lessons learned?

  • Give variables in your code specific rather than general names. For instance, isOpera would have been better than opera.
  • When no namespaces are defined, the big eat the small. If your script is less important than, say, Opera's or Google's, then watch the hell out with your naming, little guy.
# lastedited 08 Apr 2008
17 Mar 2008

 

There is an emerging discussion that the wealth difference between the West and other parts of the world is not just a result of mere aggression. And it is not only being noticed in the West. As others realize their shortcomings, the West has to work even harder.

Don't get me wrong, there is aggression out there, but a lot of people recognize that it doesn't account for the whole equation. Jeff Vail writes:
 

I can think of one justification that remains to some degree in America—we still seem to be better than anyone else in the world at combining just the right mix of structure and freedom, discipline and creativity, conformity and rebellion to create the kind of synergy that drives business innovation. That, too, seems to be changing, but for now I think it still justifies some kind of multiplier—but this mix is also subject to the forces of globalization.
Too few people see that this cultural difference is not only something we should defend and strengthen (e.g. against Islamist threats) for moral reasons or because we owe it to our forefathers who achieved all this. It is our most important asset in the game of wealth. It is what sets us apart and makes our wealth possible. And it is not a constant. It can go away.

 

However, I am not posting this to make that point yet again. I came across two sources that show how this reasoning makes its way into scientific discussion:

Study finds anti-social punishment in more traditional societies, dragging profits down

A recent study by Herrmann, Thöni and Gächter (published in Science magazine) conducted cooperation games among students in a world-wide experiment. Groups that cooperate can maximize their profits, but individuals can also defect on the others. In addition, punishment was possible. The paper can be found here (without paying a Science magazine fee), and a comment that also appeared in that Science issue is here. It concludes:

 

A few subjects, when punished, rather than contributing more, suspected that it was the high contributors who punished them, and responded with antisocial punishment: They punished the high contributors in future rounds, leading the latter to reduce both their contribution and altruistic punishment. Herrmann et al. collected data in 15 countries with widely varying levels of economic development. The subjects were university students in all societies. The authors found that antisocial punishment was rare in the most democratic societies and very common otherwise. Indexed to the World Democracy Audit (WDA) evaluation of countries’ performance in political rights, civil liberties, press freedom, and corruption, the top six performers among the countries studied were also in the lowest seven for antisocial punishment. These were the United States, the United Kingdom, Germany, Denmark, Australia, and Switzerland. The seventh country in the low antisocial punishment group was China, currently among the fastest-growing market economies in the world. The countries with a high level of antisocial punishment and a low score on the WDA evaluation included Oman, Saudi Arabia, Greece, Russia, Turkey, and Belarus.

 

 

China reassures scientists not to fear failure

This news is big:

 

China will tolerate experiment failures by its scientists to ease pressure, encourage innovation and cut the chances of fraud, a top official said on Thursday.

 

China recognizes a cultural shortcoming in its scientific community. The ratio of worthless or even wrong studies is far higher than in the West. As far as I know, this is not rooted in China's very old history. But at least over the last 100 years, China has built up a dangerous perfection cult. Dangerous for its own progress, that is. Given the change China is going through, it is politically remarkable that this is acknowledged.

 

So, while the Middle East suffers from its honor culture, China wrestles with its perfection cult. I am not saying that we don't have cultural problems of our own. My point is: (at least some) others are working on their self-understanding. Let's start working on ours again.

* image by tarotastic

 

# lastedited 21 Mar 2008
10 Mar 2008

I recently got around to getting the pictures off my mobile via bluetooth. They had been locked in there a long time.

A lot of them I had completely forgotten or never looked at on a real screen. The camera is not very good, but sometimes I obviously saw things that the world needs to see anyway. Here are some jewels:

In the library of the university of Osnabrueck: Lexicon of nutrition. Volumes A to Fat, Fat to M and N to Z.

At the train station of Muenster, Germany. If you really spelled Osnabrueck like that, it would sound as if Adolf Hitler were saying "Osnabrueck".

Seen in the window of a Turkish supermarket in Osnabrueck: "Smartklamp: Circumcision with one click!" If this is satire, it is damn good and the location has been well chosen. If not, I have nothing to add.

Update: Jan found out that it seems to be for real. Ugh. But at least the company's site is down. Maybe a rabbi or whoever does it is still the way to go.

One of my first trips to Amsterdam in search of an apartment and a new life. But the Blue Screen Of Death is everywhere! I bought my ticket elsewhere.

This is how you transport kids and other stuff on your bike in Amsterdam.

 

# lastedited 10 Mar 2008
23 Jan 2008

Looking at this patent from 1914, you've got to admit it: The Americans have had ad placement in their hearts since day 1.

It's genius: A scarecrow is an attention attraction device. Why waste all this potential only on crows? Since humans will also look, display an ad!

Samuel Hunter, I salute you! I wonder if it made him rich or if he got chased out of town :-)

Oh, don't forget to look at the drawing.

#
16 Jan 2008

What would happen if people lost trust in Google's success? If they stopped believing that the good people at Google will keep their servers running forever, let their spiders crawl to get the latest internet content, and make sure that the search results are excellent?

When people lose trust in the success of a shareholder company, they try to sell their shares before they become worthless, which makes them worthless really quick. That could happen to Google one day, of course.

But there would be another dimension: Everyone would rush to get their data from Google before the servers are dead. Gigabytes of Gmail conversations, Google Analytics statistics, Google Calendar entries, Google Reader collections. Imagine Google has technical problems, and then we all, at once, try to get our data back. Then they would really have technical problems (even without prior technical problems, I think it would be hard to survive such a data rush).

# lastedited 10 Mar 2008
29 Dec 2007

Google has published plans to build a Wikipedia rival. The key difference: there can be more than one article (a "knol") about a topic. Each article belongs to one author, and people can approve of it by voting. Google's ranking algorithm skills shall ensure that people find high-quality articles.

Now, I think that this has great potential. For over a year, people have complained about Wikipedia's admin elite. The discussion pages seldom become more than a fist fight over who is right, and it is never obvious whether controversies exist. The problem seems to lie in the system.

We have to face it: Sometimes no general consensus is reachable. That is a mistaken idea from the seventies. In five years, we will maybe laugh at the idea that people on the internet could agree on one way of explaining something.

As I write this, I suspect some highly skilled Googlers are discussing how to rank those "knol" pages. Controversial terms like "fascism" will have a lot of articles. When I search for "fascism", what will Google show me?

What I am heading for is this: Suppose that 90% of the people agree on one type of definition, while 10% favor another (Note that I say "type" - several articles can all define the term in basically the same way). There is a controversy, and I - as someone interested in the term - am interested in learning about it. I want to see both sides of it and then decide for myself what side I am on.

A stupid, popularity-only ranking algorithm would only show me articles from the 90% bulk. But 10% is a strong minority. I say: put both sides on the top page, Google. Use this new kind of user ranking to identify whether users fall into distinct groups upvoting distinct types of articles. And then, don't bury the minority opinion. This might be a new way to present knowledge on the internet.



P.S. Another thing comes to mind: Maybe we will have to get familiar with amazon-like greetings concerning opinions: "People who share your opinion on X, also think that Y."

# lastedited 29 Dec 2007
02 Dec 2007

Wow .. this is exciting news:

The 419 scam (aka Nigeria Connection) spammers no longer send out only weird English mails (telling you to send them money so you get rich by helping them transfer billions of dollars somewhere).

I have just received my first weird German 419 scam mail! It is an honor to the German economy to be recognized as worthy and dumb enough that a whole spam division is devoted to writing bad German spam mails. They even gave me a hotline to call (which, by the way, is in Belgium)! The German is very bad indeed, but they are still in the beta phase, I think. They will learn from customer feedback. Kudos!

Here are some snippets from this exciting and hilarious new product (translated roughly, broken grammar and all):

 

When our father has started receiving death threatenings, he has told me that he had deposited the sum of 6,400,000 dollars (SIX MILLION FOUR HUNDRED THOUSAND AMERICAN DOLLAR) in a finance security house in Brussels-Belgium. He has told that me, that incase anything happens to him, I should try and should arrive in contact with the security financing company, and that he has made me the next of kin, and that the contents were declared as FAMILY VALUABLES.

We have decided to offer you 20% of the total sum for your help, and to enter. You will also take us with you to your country and will help us to manage the money for us, while we will continue our studies.
We are diplomatic passport containers, so that we have no problem to travel to anywhere at all in the world. Please reply to us through our private e-mail address, which is: Johntumba2007@yahoo.com
If you call us, I can speak German with you so it is no problem.
Our telephone number here is 0032448349849.

 

Germans, look out for your parents!

And Gmail, I see new work for your spam filters ahead.

 
# lastedited 02 Dec 2007
19 Oct 2007

During the 77-year celebration of the institute of sciences ("Exacte Wetenschappen"), I took part in a salsa workshop. Now pictures have emerged on the interwebs. I am the one with the black hat.

Thank god, I am not making terrible mistakes on any of them :-)

It was fun, though. And a great buffet.

[pictures from fotorene.nl]

#
24 Jul 2007

Wow - this is absolutely impressive. It is something I'd like to study in my Master's programme. And - as I am currently preparing for my examination in Philosophy Of Mind - can you imagine what it is like to be a part of this? What is it like to be a starling?

#
10 Jul 2007

For my Bachelor thesis (I am writing it using LaTeX), I wanted to have little bordered text boxes that would float next to the main text.

They should explain terms that are mentioned in the main text, so that everyone could read more about that term if (s)he needed to.

It turns out that this is no easy task with LaTeX. All the standard elements I tried would either not float very well, have no border, and/or fail to wrap the lines within the info text.

I spent three hours coming up with a nice solution, so it might be useful to people if I share it here.

I defined a new command at the top of the document:

%%%% Custom Command for floating Infoboxes
%%%% usage: \infobox{<title>}{<text>}
\usepackage{picins}                 
\newcommand{\infobox}[2]{
    \parpic(0.34\textwidth,0pt)[lf]{
        \parbox[b]{0.32\textwidth}{
             \bigskip {\bf #1}  \small{{{\sffamily #2}}} \bigskip
        }
    }
    \bigskip
}

The picins package is normally used to place pictures within the text, but when I place a \parbox inside, it works really nicely for text.

To create such an infobox somewhere in the document, you just use it like this:

 \infobox{XSLT}{
    XSLT is a stylesheet language that can parse XML files and transform them. The output will be another text file, possibly XML. It offers a lot of capabilities as it is a fully functional programming language.\\
    Like XML, XSLT has also been specified by the W3C consortium.   
  }

Here "XSLT" is the bold-faced title within the infobox and the second argument is your info text. Here is a screenshot that shows what a result looks like (I bold-faced "XSLT" in the main text myself):

 

The text block that the infobox floats around is simply the one it precedes in the LaTeX document. You can also have the infobox on the right: give the parpic command the option "rf" instead of "lf".


The picins package is praised as a really nice one, but it has some problems: if the remainder of the paragraph text is insufficient to fill the area next to the infobox, the text from the following paragraph will run through it. It also won't work with enumerate/itemize beside the text. Both of these issues can be fixed on a case-by-case basis, but it can be nasty...

Feel free to drop me a comment if you like it / can do it better / can make it more beautiful.

 

Update (05.08.2007): I made the height of the infobox independent of the text length and used sffamily for the font. I now also mention some problems with picins  that I (and others) ran across.

# lastedited 05 Aug 2007
10 Jun 2007
For my web CMS "Polypager", I recently put a CAPTCHA mechanism into use (CAPTCHAs help to block comment spam by asking commenters to identify text from pictures, a task where humans are still much better than computers - I wrote about them in a previous blog post).

This is what it looks like:



It's a cool one, a service by Carnegie Mellon University which lets people identify two words. One is to make sure you are human, and the other helps digitize books from libraries: Sometimes the software that scans in those books is not sure what to make of a word, so people who want to write a comment can help.
So in this example, typing in "overlooks" may assure my CMS software that you're a human, while typing in "inquiry" helps to identify this word, which the book scanning machine at Carnegie Mellon has a problem with (it could also be the other way round, who knows?).

For a year or so, I had tried other tricks to prevent comment spam, requiring no extra work by the user. But recent developments by the spammers seem to have cleared those hurdles. As hundreds of spam comments rolled in, I had to take some action.

Anyway, though it's a really nice idea and all, I do fear that this secret from postsecret.com is really widespread:

A secret from www.postsecret.com

I think the point behind this secret is that simply reading and entering text is too dumb a task - solving captchas should be fun, not work. Even the Hot-or-not approach is more work than fun. It's always the same task. Repetition is never fun for humans. Only machines like it.

Let's crunch some numbers: The recaptcha guys claim that "About 60 million CAPTCHAs are solved by humans around the world every day." Say that one of them takes 5 seconds to solve. That amounts to more than 3,400 days of time - and it seems that people regard this time as labor!
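
Spelled out, the back-of-the-envelope calculation looks like this:

captchas_per_day = 60000000
seconds_each = 5
person_days = captchas_per_day * seconds_each / (60 * 60 * 24)
print(round(person_days))  # about 3472 person-days of typing, every single day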

What ideas could we come up with to make assuring that you're human more fun - like a little 5-second game?

Here is one: You heard of this place called Web 2.0 where thousands of users generate content and the site creators do nothing but provide the platform?

Why not create a CAPTCHA platform like this?

Users submit little riddles, you know, obvious ones that you can solve really quick. Just a little picture and a one-word solution to it.
Here, I made a quick example (using Inkscape):



I know, it's not beautiful and not funny. But it's different. Everyone will submit different riddles. That's what humans like: "Gee, I wonder what kind of riddle I get this time!".
The fun part here is that they all come from different sources. Each one is different: some are funny, some have a message, others are beautiful ... you get it.

Let's assume it would work in the Web 2.0 manner and thousands or millions of riddle CAPTCHAs come together, new ones all the time. They would be very, veheery hard to crack using machines. The spammers would have a hard time defining the problem space. They just wouldn't know what to adjust their algorithms to.

I hear you say that people would never submit riddles to a platform in satisfying numbers - but that is the same argument they brought forward when Wikipedia and YouTube started.

So, the riddle CAPTCHA platform already has some Web 2.0 features like user-created (and user-owned) content. We don't need to add social networking, but let's add the Architecture Of Participation:

Riddles could be ranked. Each commenter is asked how (s)he liked the riddle (this requires just one click and is not mandatory). This would create the popularity contest that Web 2.0 people like so much. However, the popularity of a riddle would not be reflected in the number of times it gets used. That would be too easy for the spammers.
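
A minimal sketch of how serving and checking such riddles could work (the riddles, file names and rating scheme here are invented purely for illustration):

import random

# Each riddle: an image the user sees and the one-word answer we expect.
riddles = [
    {"image": "bald_head_with_three_hairs.png", "answer": "hair", "votes": 0},
    {"image": "sun_behind_cloud.png", "answer": "cloud", "votes": 0},
]

def pick_riddle():
    # Serve a random riddle; popularity must NOT influence how often one is used,
    # otherwise spammers could focus on the handful of riddles shown most.
    return random.choice(riddles)

def check_answer(riddle, user_input, liked=None):
    correct = user_input.strip().lower() == riddle["answer"]
    if liked is not None:           # optional one-click rating after commenting
        riddle["votes"] += 1 if liked else -1
    return correct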

Alright, that's how far I'm pushing this idea forward for now.
If anyone sees a reason why this is a bad idea, please comment.
# lastedited 11 Jun 2007
06 Jun 2007
EEG
A real Cognitive Science souvenir is a photo of yourself  while participating in an EEG study. I just did my second one, which had me looking at animal pictures, listening to animal sounds and hitting a button when the right animal appeared.

Here are my souvenirs:

  
# lastedited 19 Jul 2007
20 Apr 2007

Just a recommendation: great talks from the TED conference are now freely available.

I just watched Ze Frank and he is hilarious as always (to me, at least). Go have a look, it's only 18 minutes.

You might also want to check out Malcolm Gladwell.

Plus, you can just get the audio if you want. I just downloaded Peter Donnelly (How juries are fooled by statistics), Helen Fisher (The science of love, and the future of women) and Jane Goodall (What separates us from the apes?) to my iPod :-)

# lastedited 20 Apr 2007
12 Apr 2007
I just watched the Google Tech Talk about the new Yahoo Pipes service.
It's very cool (I also liked hearing about the difficulties of using the Javascript canvas feature).

Basically, you can combine RSS feeds (a standardized XML format) from different web pages and do stuff with them. You may want to filter for keywords. Those keywords may in turn come from another RSS feed. You can generate results for specific problems very quickly.
The working example in the talk is to find apartments on Craigslist whose addresses are in Palo Alto, California, within a range of 1 mile of a park.
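
In code, such a pipe boils down to very little. A sketch assuming the third-party feedparser library (the feed URL and keywords are placeholders):

import feedparser   # third-party: pip install feedparser

def pipe(feed_urls, keywords):
    # Fetch several RSS feeds and keep only entries mentioning one of the keywords.
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(kw.lower() in text for kw in keywords):
                yield entry.get("title"), entry.get("link")

for title, link in pipe(["http://example.org/apartments.rss"], ["palo alto", "park"]):
    print(title, link)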

Here is another example showing off their great in-browser editor.


A great idea.

I kept on thinking for two minutes and had the idea that you might make this usable without this editor you see above. You see, they made it so good you can almost read out loud what is happening in the pipe.
It should be possible to translate Natural Language, or a subset of it, to a Yahoo Pipe. How great would that be? Ok, a subset of Natural Language to describe queries... hey, that's SQL!

Then I googled for "yahoo pipes sql" and found out I wasn't the first with that idea. Of course not.  It's a great point and I'm not ashamed to emphasize it again.
# lastedited 12 Apr 2007
10 Apr 2007

Once in a while I get the feeling of being overwhelmed by the complexity of all the electronic stuff I am working with.

Photo by Victor Nuno (http://www.flickr.com/photos/victornuno/495355673/)

Programs that I use as tools, libraries that I use to write code and the source code I write myself. They interplay. They may work or they don't. It may be my fault or not.
And there are those thousands or maybe millions of tools and libraries out there that I could use but haven't heard of. Maybe I would find the perfect solution within the next click on Google? Where is it?

This article by the creator of one of my favorite leisure-time surf sites gets another issue to the point: software projects, ours included, fail way more often than we like to admit. OK, we felt that this was true. Building nice, reliable software is hard. It's somehow an unsolved problem.
But here is the punchline: You can call a project successful when it lasts 15 years.

15 years. Somehow I knew about this, but I never consciously thought about it. It's true. Today, especially in information technology (but not exclusively), putting something out there that lasts for 15 years is almost all you can hope for.
It happens if you're really good.

In five years, there will be new ways of doing things and your project will be considered old. They'll probably think your code has that old-man-odor and it takes another two years until someone finally rewrites it, but that will be it, mostly.

It's stuff to make you throw the keyboard away and run away to start a new happy life in the forests.
With software, it feels like the effort you need to put in to make something last even a few years is ridiculously high.

But then again, maybe we can take joy in this modern craftsmanship again by accepting that it's faster, but not so different. I love this story from David Humphrey, who swims through the ocean of Open Source software trying to make sense of it. He learned about his grandfather and found out how much the things they do/did are/were alike, even with 50 years difference:

[ed: Like him,] I take things that were never meant to go together and work with them until they fit, patiently refusing to give up - even when it’s not clear I can make it work. Where she and my grandfather worked with pump motors, I work with source code.


Maybe not even the success rate changes so dramatically. As more programs get written (i.e. as more tools are made), there is more success and more failure; both of those numbers go up.
What changes is the time frame. It's faster.
And also the standards rise. Fewer people feel they need to keep that ol' piece of software when there is so much innovation out there.
It's the way of the world.

However, maybe one day we will be telling our kids the same stuff again: "When I was young, you could fix or build most of the software yourself. Today, it seems impossible!! *cough*"

# lastedited 23 Sep 2011
26 Mar 2007
I was browsing through my Google Page statistics lately. Gosh, I didn't know I would be found primarily for terms like that. But let's begin from the start.

I include some JavaScript code on my page. This code informs Google about visitors to my page. That way, Google is able to tell me how many people came to my site. Or where they came from. And sometimes what they were doing before.
You see, when you search for something on Google or any other search engine, you visit a URL like this:

http://www.google.com/search?q=fluch%20der%20kindheit&hl=en

The search query is right in there. Here, it's "fluch der kindheit" (German for "curse of childhood"). Hold on to that, it's gonna be important.
When someone clicks on a link that was presented as a result for that search, the server that hosts this page will know the last URL you visited (it's called the "referrer" or something like that). This way, Google knows where my visitors came from.

Now, Google can tell me for which search queries my page was listed at good positions, say in one of the top ten.
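Just to show how little magic is involved (a sketch, not Google's actual code): pulling the search terms out of such a referrer URL takes a few lines of, say, Python:

from urllib.parse import urlparse, parse_qs  # Python 3; older Pythons had this in the urlparse module

referrer = "http://www.google.com/search?q=fluch%20der%20kindheit&hl=en"

# parse_qs decodes the query string, including the %20 escapes
params = parse_qs(urlparse(referrer).query)
print(params.get("q", [""])[0])   # -> fluch der kindheit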

It turns out that when people, for whatever reason, google for "fluch der kindheit", my page turns up at number four, or near there. Yippie.



Guys, my childhood wasn't that bad. I was only joking there. I have to study harder so one day I will program a sense of irony into Google :-)
# lastedited 10 Apr 2007
01 Mar 2007
Here is some eye candy for those of you who like to see robots in action.

Here is the very first walk of a robot called Dexter.
It's exciting because he is a bipedal robot that balances dynamically. That is a really hard problem. It's this constant-falling thing we humans call walking and do so easily. Anybots, the company that makes Dexter, has worked six years on this success.



This second movie shows robots in a brand-new experiment about evolving communication.
Researchers let robots learn how to use their sensors and motors to get to food (the apple logo) and avoid poison (the pirate logo). Here, "learning" means a kind of artificial evolution: the robots started with small randomized motor programs. Those evolved over many trials: the researchers kept the best-performing motor programs and let them mutate a bit before copying them onto all robots.


What you actually see here is the point of the experiment: The robots could communicate. Under some circumstances, they emitted light. They decided themselves when to do it. Some populations of robots would blink when near to food, others when near to poison, emitting a call or a warning for other robots, respectively.
In the end, codes of communication evolved. The benefits outweighed the costs (time, energy, used motor circuits).
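Just to illustrate the kind of loop described above (a toy sketch in Python, nothing like the researchers' actual setup): here a "motor program" is just a list of numbers, and the fitness function is a stand-in for "food found minus poison touched".

import random

def fitness(controller):
    # Placeholder: in the real experiment this would be food found minus poison touched.
    return -sum((gene - 0.5) ** 2 for gene in controller)

# Start with small randomized "motor programs".
population = [[random.random() for _ in range(10)] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    best = population[:5]                                   # keep the best-performing programs
    population = [[gene + random.gauss(0, 0.05) for gene in random.choice(best)]
                  for _ in range(20)]                       # mutated copies refill the population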
# lastedited 01 Mar 2007
28 Feb 2007
For almost two months now, I have been spending a lot of time at the Institute for Neurobiopsychology at my local university, programming a tool that helps researchers create models of human attention.
I'll explain that tool some other day, but it's very exciting and almost finished.

Today, I thought I'd share what this place looks like.
It's here:

This is where I sit:


Find more pictures of that ugly building here. The beauty is not in the surroundings, but in the work I do :-)
# lastedited 28 Feb 2007
07 Jan 2007
The resolution of the images Google uses for its Maps and the famous Google Earth is getting better all the time.
Boston is the city where the pictures are the best. At xrez.com there is a big zoomable picture of Boston, and this little part of it shows just how fine the resolution is these days:
# lastedited 09 Jan 2007
21 Dec 2006

At the moment, in my attempt at Interactive Storytelling, I'm thinking about storyworlds. Worlds, in which stories can be told.
My topic of interest is now language.



(Wait,wait,I'll explain the picture later)

Language, normal written language, is the only real material I am working with.
It is the mind and the matter.
In advanced applications, you also want other types of language, like body language or facial expressions. They will make the experience complete. But basically, they don't carry the meaning. You can read a book and use your imagination to enrich the meaning with experience that way.
Written language, therefore, is not a problem that I'll handle at the beginning and can then go do other interesting stuff. I will deal with it all the time.
As will become clear later in this text, every single time I think of what the system can or should do, I will return to the language to make it sayable. So, in a way, such a system has some kind of Sapir-Whorf hypothesis as a built-in premise (that's a famous and much disputed linguistic hypothesis saying that you can't think what you can't say).

The point here, and that is what makes this task so interesting to me, is not to come up with a system that can communicate about anything in any style. I know about the infinity and complexity of language. It is out of respect for it that I know the best thing I can do now is a great illusion. My success is measured by the difference between the effort I put in and the experience I actually accomplish.

In a way, you can compare Interactive Storytelling at the current stage to the movies a hundred years ago. The technology was still evolving, but you could come up with ingenious methods to create an illusion that really entertained people.

For instance, the "Ames-Room" trick to make one person seem ridiculously small compared to others, like Jim Carry in "Eternal sunset of the spotless mind" when he goes through his childhood again in his dreams. In the bonus material on the DVD they explain how they did it. They were not using computer effects, they were doing it the old-fashioned way.
The way the innovators of cinema did it 100 years ago. They build a room that changed perspective by narrowing walls. If you see it from one perspective, people that are standing 10 meters apart actually seem to be next to each other. But of course they are seen in different sizes. This trick is old, but still used.

Maybe some tricks that are invented for Interactive Storytelling these days will still be used 50 years from now.

Ok, back to our actual technical problem: language. Language makes it possible to say anything. We don't want that.
In fact, the truth is, we can't handle that, because every sentence must be mapped to something it means.
When the user wants to say:
"I kick Fred"*
we must have a Fred represented in our system and calculate the hurt he experienced from the kicking. Before that, we would have to know that "kick" is a verb and that it goes with one transitive object.
Or maybe we said it can have two:
"I kick Fred in the butt"
So now the first transitive object declares who we kick, and the second declares where we kick him/her.

So that's introducing the structure problem. That's what grammars are for. The trouble is only just beginning.
Can we only kick people? I could kick a dog, too. Or a vase. Do I need an ontology of things in the world, so that I can kick anything that's built of matter? That relates to the problem called "semantics".
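To make the structure and semantics problems a bit more tangible, here is a toy sketch (purely illustrative, nothing from an actual system): a handful of grammar rules plus a crude "ontology" of what can be kicked.

# Toy grammar: a sentence pattern is just a list of slots and literal words.
GRAMMAR = [
    ["SUBJECT", "VERB", "OBJECT"],
    ["SUBJECT", "VERB", "OBJECT", "in", "the", "BODYPART"],
]
WORDS = {
    "SUBJECT":  ["I"],
    "VERB":     ["kick"],
    "OBJECT":   ["Fred", "the dog", "the vase"],
    "BODYPART": ["butt"],
}

# Crude ontology: kicking is only allowed on things made of matter,
# and only some of them can feel hurt afterwards.
MADE_OF_MATTER = {"Fred", "the dog", "the vase"}
CAN_FEEL_PAIN = {"Fred", "the dog"}

def kickable(thing):
    # Structure says any OBJECT fits the slot; semantics says it must be matter.
    return thing in WORDS["OBJECT"] and thing in MADE_OF_MATTER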

Now to the really hard stuff: Meaning. If I kick the vase, what does it mean to Fred? Is it his vase? Does he like it and should now be sad? Am I showing aggressive behavior and he should be fearful? Or are we drunk teenagers and it actually doesn't mean anything but fun (and a little danger of getting caught)?

That's a hell of a lot of stuff to chew on. Some might say that more computing power and advances in language processing technology that rely on statistical methods over large natural language corpora will someday overcome those problems. OK, some things will change. Translation, for instance, is on the edge of becoming really good within a few years. But the meaning problem, at least, is different. No brute force will solve it.
This is where some really smart engineering should do some magic. Not to solve it, mind you, but to allow building applications that give you the feeling that they can handle what you intend to say about the world they deal with.

The trick that I'm after here, that might enable me one day to deliver an illusion that people will bother spending their time with, will have something to do with the creation of a sublanguage.
A sublanguage that allows me to keep control of everything that needs to be done to turn utterances into things happening in the world. To compute reactions. At the same time, using the sublanguage shouldn't make the user feel restricted.

Within the huge space of possibilities that language offers, I need to carve a room that's handy to work in, but also comfortable to live in.
Some rooms can be made to seem larger than they are, like they do in the movies. Sadly, I cannot force the user into one and only one viewing position.



* using the name "Fred" is a reminiscence to Chris Crawford's great book "On Interactive Storytelling". He uses this name for examples a lot.
# lastedited 21 Dec 2006
08 Dec 2006

Chris Crawford was in Darmstadt on a conference dealing with Interactive Storytelling.

Hm, never heard of it.

That's bad.


He doesn't see anyone contributing anything remotely interesting yet. So the competitive field is still relatively empty.

That's good.

Time to dive in!

# lastedited 09 Dec 2006
04 Dec 2006
After having listened to IT conversations for about two years, I recently joined the team that produces those wonderful (and free) podcasts about science and technology.
I became a website editor, which means that I occasionally grab a show that is waiting to be published from the queue and provide all the text and image content that describes it to interested surfers. It's one of those good kinds of jobs because
  • I listen to shows anyway and now I force myself to really delve into some of the topics and speakers
  • I get to train my English writing skills
"My" first show has just been published. You'll find it here. If you are looking for interesting stuff to try out podcasts, try Moira Gunn's "Tech Nation" or, my favourite, the Pop Tech conferences.
# lastedited 04 Dec 2006
23 Nov 2006
In recent years, autism has gotten a lot of attention (consider the linked Wikipedia article: it has been edited 45 times in the last week). This is not just due to the movie "Rain Man", but to a lot of scientific findings and new theories. For instance:
However, most of the scientific discussion boils down to two problems:
  1. Causation: Is it purely genetic? Purely environmental? Which genes might be responsible?  If we know the cause, can we cure it?
  2. Classification: What is autism? Who is autistic, and who is just "nerdy"? Are the rising numbers of diagnoses over the last decades just due to the attention the disease has gotten? Are people gradually distributed on the spectrum of autistic traits, or are "normal" people on one cluster, and "ill" people on another?
There is a lot going on, but for now, I'd like to take a look past those two questions. Not science, but sociology.

I do think that people are spread quite gradually over the autistic spectrum, but people's perception of that is different.
Let's look at the side of the spectrum containing people who generally are able to live on their own (that includes almost all of it). Look at the list of autistic traits: You'll find some entries that more or less fit someone you know or even yourself.
The general perception is that every entry on that list that applies to you pushes you further from "you're like us" to "you're different". Or from "healthy" to "ill". It's a one-way view on the matter.

Now, I know about the severity at one end of the spectrum and the problems that those people have to face. But if we're assuming (and I do so) that all people are quite evenly spread over the autistic spectrum, then I have the right to talk about other parts of it.

Some of the autistic traits apply to me myself (I have even been called "autistic" once, admittedly jokingly). I want to remind everyone that the brain turns an absence of one trait into a strength in another. People who don't socialize every Friday and Saturday, like to engage in geeky talk, love to dive into details from time to time or have difficulties verbalizing their feelings (just to name some obvious traits) are worse at one of the games, but better at another. Most of the time, they don't need help, advice or pity. Really, let it go. It bugs the hell out of them. They don't live in a different world, they are just some way down the spectrum.

I feel there is a tendency to discriminate against people who have a mild level of some of those traits as "unsocial". That's a problem. I think that most people's (and also some professionals') psychological view is that "the more intimacy the better for everyone". Everyone should socialize and engage in deep conversations about feelings at least twice a day (or something like that). Here is a story about a poor guy who only got "good" advice of that sort. He almost killed himself.
These traits tend (I'm trying to be careful here) to be "male" traits. Just like people are starting to realize that you should allow boys to play physically and aggressively on the playground, they might realize that it's OK for some to be alone, or to engage in details, or the like.
# lastedited 23 Nov 2006
20 Nov 2006
I think this picture tells it all - many of us feel just like that too much of the time.

There is a brilliant talk by Professor Schwartz on this topic called "Less is more".
#
11 Nov 2006
I'm certainly no expert in videogaming. But since I read around the topic a bit in the last few weeks, I am following the events as a bystander. The video game market has a lot of money to offer - and a lot of money is spent there. The big players are Sony (PlayStation), Microsoft (Xbox) and Nintendo (Wii).

Many discuss what the future of gaming will look like: games might surpass movies as the number one blockbuster entertainment category. Or: games will be used for education.

Today, the new Playstation 3 was released in Japan. In a few days, Nintendo will release its next generation console, the Wii.
The interesting thing for me is: Will they be successful? Some people (eloquently) suggest that those big players have been building up a bubble which will crash soon. They are investing a lot of money, but the novelties in the last year haven't been extraordinary.
  • There are advances in graphic capabilities, but they have gotten slower. It's not the same Aha-moment we had 6 years ago. Why pay hundreds of dollars?
  • The advances in technical capabilities mostly include added functionalities that have nothing to do with gaming: Playing music, a web browser and so on. I don't think that those capabilities are really able to impress a lot of people that have real computers at home. In addition, those features will all be pretty proprietary. I see a lot of frustration on the horizon.
  • The most important point: There is no innovation in ideas for games. In the end, almost all of them are mere finger training. They don't capture your imagination. Microsoft hired Peter Jackson to bring stories to games, but one attempt might not be enough. Maybe the  industry needs a crash, so that bottom-up innovation can make its path (And of course this is where I see interesting opportunities for developers to get heard).
So I'll be watching the scene with interest. Sales are one thing - there will always be thousands of people that drool for the newest technical gadget (especially making the headlines by waiting in line at the stores the first day after launch).
But what about overall sales in the long run? Compare the sales figures to the actual investments made (many say that Microsoft is already losing lots of dollars). Will players actually be frustrated?
I do hope so, because that's when it gets interesting :-)

Update Nov 13:
It seems that with the PS3, Sony will lose several billion dollars, too. I would say that it's ok to lose some money with a ground-breaking product, so you can earn it later with the market share the product brought you. But 5 billion dollars (or whatever it really will be) is not "some", not even for Sony, and as I said above, I don't see this product leading anywhere that spells "innovation of user experience". It's just a bunch of high-tech features thrown against the wall, hoping that some of them will stick with the user.
This market is indeed showing classical signs of a bubble. The critical measure will be the willingness of people to pay for these "innovations".
# lastedited 13 Nov 2006
10 Nov 2006
My first job in my own approach to Interactive Storytelling was a prototype for the user interface. Chris Crawford suggested using an inverse parser, and I liked the idea. A parser is a program that can tell if a sentence is valid for a given grammar. An inverse parser takes part of a sentence and tells you what would be a valid way to continue it into a complete valid sentence. That is of course a nice way to help users input sentences to a program that uses a grammar. My own twist on the idea is that I plan to deliver the "game" over a web browser.

Here is a screenshot of the first prototype, using only a very simple language with very few words. But I think you get the idea.
On the left you see the menu with which you construct what you want to say or do. On the right is just a list of things you said or did.


The parser proposes possible next words as you type. You can also avoid typing and only select from the dropdown menu by using the mouse or cursor buttons.
If a complete sentence was entered, the input box turns green and the "say" button is activated. If you enter a letter so that your sentence will never be valid, the input box turns red.


Some technical details:
I'm learning to love Python as my favorite programming language. In addition, I thought that Prolog would be a great language to code the grammar in. I struggled hard to get Python, running on an Apache web server, to use Prolog via Pylog, an interface plugin between the two. It became too hard once I tried using it over the web server: there are several independent processes talking to each other, one of them being the Apache web server, and that's a lot of trouble.
I decided to write the whole parser myself, using Python and its generator concept - for now, it works great.
The interaction with the parser along the way uses Ajax technology to create the auto-suggest box (thanks to brandspankingnew for inspiration). I'll need to see how well that works with a big grammar, but the communication load will never be too big (I'll introduce a limit on the number of suggestions that are sent at once).
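For the curious, here is a minimal sketch of the inverse-parser idea (not the actual prototype, and the "grammar" is just an explicit list of complete sentences): a generator that yields every word that could legally come next.

# Toy grammar: just a list of valid sentences.
VALID_SENTENCES = [
    ("I", "kick", "Fred"),
    ("I", "kick", "the", "vase"),
    ("I", "greet", "Fred"),
]

def next_words(typed):
    # Yield each word that could legally follow the words typed so far.
    seen = set()
    for sentence in VALID_SENTENCES:
        if tuple(typed) == sentence[:len(typed)] and len(sentence) > len(typed):
            word = sentence[len(typed)]
            if word not in seen:
                seen.add(word)
                yield word

print(list(next_words(["I"])))          # -> ['kick', 'greet']
print(list(next_words(["I", "kick"])))  # -> ['Fred', 'the']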
# lastedited 13 Nov 2006
25 Oct 2006
Here is a short list of resources on that interesting new world of Interactive Storytelling:

Wikipedia on Interactive Storytelling
The article is really short, it becomes interesting when compared to Interactive Fiction.

The website of Chris Crawford's new company (founded this year). It's a mixed team of under 10 people. They want to commercialize Crawford's Erasmatron engine. Part of the plan is to attract creative writers who would use their tool to create storyworlds. The current beta version of their development tool is not very intuitive at this point (see below), but no one said it's gonna be easy (and I admit I didn't read the tutorial).



The blog of the creators of Facade and others
(Facade is one of the first serious approaches to Interactive Storytelling - well, Interactive Drama.) You should try it out. It only takes 10 minutes or so. They, too, are working on a new product (but they reckon with 3 years - there is a lot of work in such a thing...)


 
#
12 Oct 2006
I am just playing around with PHP 5 for the first time (for a study project).
So I made a class out of some PHP 4 code I had and called the PHP page containing it.

Firefox said nothing. Nada. Safari at least said:

"Safari can’t open the page “http://localhost/elearning/testils/UserTask.php”. The error was: “lost network connection” (NSURLErrorDomain:-1005)"

It took me a while to find out what went wrong:
I had used the static variable syntax wrongly.

Try calling this script:

<?php

class Test {
    public static $bla = "bla";
}

echo(Test::bla);
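// The line above is the deliberate mistake: the $ before "bla" is missing.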

?>


It should fail in the same way (that may depend on the Version of PHP 5, I don't know...).
The last line has to be

echo(Test::$bla);

The first really crappy thing about PHP here is that you don't get any feedback. It took me over an hour of my time, guys! Tell me what I did wrong.

The second thing is that to print a static variable "bla" of the same class you're in, you would say (like above):

echo(self::$bla);

But if "bla" were non-static, it would have to be:

echo($this->bla);

The dollar sign switches from the class/object designator to the variable name. Why? I bet there is a reason that could be nicely explained, but I also suspect I'm not the only one who didn't try that first...
# lastedited 20 Oct 2006
09 Oct 2006
Peter Jackson (the producer of 'Lord Of The Rings') just appeared on a discussion panel at an Xbox conference.
The topic: "Interactive stories in Games".

It's all about how the next generation of games will change while the player interacts with them. The gaming industry has not been able to address this issue for years now. It's about time.

Now that this big fish in blockbuster entertainment is in the boat, I thought it's about time to announce that I'm sitting in that boat, too :-)
For a couple of months I've been thinking about what I could do in that field and have had a lot of ideas. The whole thing is a really complex problem, because it involves the player in a complex world that is defined less beforehand, and more while the player actually interacts with it.

So there is a lot of AI involved, and maybe Psychology is helpful - great, I study Cognitive Sciences.
And, a lot of dramatic, artistic knowledge is needed, because this will be about complex characters interfering with the player. This is not Sim City.
I'm actually trying to involve people who have exactly those artistic skills and knowledge, and whom I know well enough that they'll eventually understand the limitations and embrace the technical challenge with me.
I read Chris Crawford's excellent introduction to the topic and I think it's time to get my hands dirty with programming.
I think there are big names in that pond, but it will eventually turn out to be a very big pond, so I'll take a chance...
I'll keep posting about things that are interesting around my efforts...
For the moment, I'll leave you with two links to the famous people I noticed:

Chris Crawford's new company
Peter Jackson on the XBOX conference
# lastedited 09 Oct 2006
04 Oct 2006
When I visited Gambia (that's on the western coast of Africa), I thought about the Digital Divide and what could be done to help people there to catch up on getting in touch with computers, a bit at least.
I realized that this would achieve the best results if you target children, as they are eager to learn and have no fear of things that are completely new.
My first idea was to offer godparenthoods, meaning you would guarantee daily or weekly internet access for one or more children in Africa for a small amount of money (the concept is borrowed from the godparenthood programmes where you ensure education in a school, or food. However, this concept would target another kind of donors, namely geeks).

My thoughts about that were mostly concerned with money, organizational problems, infrastructure and so on.

I remembered those thoughts when I read this article, which takes a closer look at what the children would actually do on the internet. Or how they would approach the computer in general.

It deals with the idea of an Indian computer scientist. He put a computer in the wall next to a street where slum kids played. Then, without any explanations or guidelines, he just watched what happened as the children discovered that strange device. They taught themselves really stunning things on the computer and explored the web within a few weeks. You should really read the article/interview. It's not that long, either.

I only have two concerns:
  1. One thing that I also thought about back in Africa is what happens when you expose these children to email or instant messaging. There are certainly bad experiences to be had out there. Children in Western countries are endangered by pedophiles and other threats, too, but they are better informed as they grow up with the technology and, most importantly, they are rich and thus far less easily tempted.
  2. Mitra (the Indian scientist) compares those self-educated children to Indian cable technicians. Most of those people can do a good job without any deeper understanding of the system they're dealing with. They just remember sequences of actions that lead to success. Maybe that's a necessary intermediate step towards a knowledge society. But maybe it's inherent to Indian culture. The latter option would be a really bad sign for India...
# lastedited 05 Oct 2006
03 Oct 2006
I just solved a problem that I had given up on once - I tried again and finally succeeded. Such things feel great :-)
So maybe the combination of words I use here will help others google the solution quicker.

When you set up an Apache Webserver, you might want to develop your files somewhere else, say, somewhere in your user directory (Apache normally resides in some system directory and it's not convenient to go there for development).

You'll soon find out that you need to make two changes to httpd.conf:
Change DocumentRoot to your home directory and change the corresponding <Directory "...path..."> entry to the same path. Then restart Apache.

Now when you browse your home directory, you might get an error, telling you that you don't have the permission to see anything there.

I spent a long time reading forum posts on the internet and fiddling around with the permissions of that DocumentRoot folder I wanted to use and its contents.
I used chmod to change the permission settings - no success.
I used chown to change the user and group who own the folder and its contents to 'www' (the default within httpd.conf) - no success. (By the way, always use the -R option, so you'll change all the content's ownership in that folder recursively)

Then I read this post that dealt with something different, but reading to the bottom was worthwhile:
The whole path to your DocumentRoot folder must be accessible by the Group (and User) specified in httpd.conf. (I guess this holds when DocumentRoot is not within the Apache home directory.)
So I changed the Group and User in httpd.conf to 'nic' (which is me on my local machine, and I own the whole path to that directory), reset all the ownerships on my development folder to 'nic', and it worked!
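To sum it up, the relevant httpd.conf lines ended up looking roughly like this (the path and user name are of course just examples for a local development setup):

# httpd.conf excerpt (paths and user name are illustrative only)
User nic
Group nic

DocumentRoot "/home/nic/dev/www"

<Directory "/home/nic/dev/www">
    Order allow,deny
    Allow from all
</Directory>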

Note: Of course no one would want to do this in a production environment! This is just for testing on your local machine.
# lastedited 03 Oct 2006
25 Sep 2006
I always suspected that nerds are cool. Here is the video proving it:


(and another version, pictures  from someone else, but with lyrics to it).

But don't panic: I took the test to go with the song, and ...
# lastedited 25 Sep 2006
22 Sep 2006
I am currently studying at UCD, Dublin, Ireland.
Every international student spends a lot of time organizing things: courses, accommodation and the like. That's tradition.
Also, everyone who needs the internet has to organize their access to it from the university's network, too.

Here at UCD they use a proxy server (a computer that channels all internet traffic from registered users). They have IT staff who help all students configure their browser (IE mostly :-( ) so that they can surf.
For me, that's not enough. I have a whole bunch of applications other than browsing that need to work with the internet. There is Skype for internet telephony. There is file transfer with the web server that I rent for hosting this (and other) homepages. And there is Subversion, a version control system that helps organize several people's work on the same program files.

I really need those. And the IT guys are really not that helpful, especially for people with Mac computers. The problem is that you never know what the Proxy server allows and what it just won't let through unless you've configured it yourself. They could really give out info about that, but maybe there are just not many people like me here...

I still have problems, especially with subversion, which works with some servers, but not with others...
Anyway, here are some good hints for all people that also need to get along with their proxies:

  • For Mac users: use Authoxy. It's a tool that will intercept all requests and forward them to the Proxy. I got Skype working that way.
  • For FTP: There might be just no way you can get a decent FTP program to work properly behind some proxies. I installed lftp and got it working somehow, but it wouldn't let me delete. What the... ? So I remembered a hint someone gave me years ago and surfed to www2ftp.de. These guys let you do File Transfer over HTTP with the browser. And it works great. You can even upload a whole Archive with stuff and the tool will put every file at the right place!
  • For subversion, the place to specify a proxy is ˜/.subversion/servers - but I do have my issues with it, still.
  • And again something for Mac users: DarwinPorts (slow, but worthwhile) will need to know the proxy, too. Set the environment variables "http_proxy" and "FTP_PROXY". For example, you could add these lines to your .profile file (replacing the UCD proxy with yours):
        export http_proxy=http://proxy.ucd.ie:8484
        export FTP_PROXY=http://proxy.ucd.ie:8484

      
You will have to restart your Terminal session afterwards, of course.
# lastedited 03 Oct 2006
19 Sep 2006
This article from The Atlantic is a very well-written exposé exploring the question of where religion comes from (an indoctrinated opium for the masses, as Marx said?) and where it stands right now (about to vanish in the face of the success of science?).
Both of those questions are answered by the thesis that religious/supernatural thinking lies in the genes. It has always been there from the start and it always will be. By ascribing intentions to some "things" in the world (like people) but not to others (like chairs), people naturally act as dualists. In fact, people are born as dualists (dualists separate soul and body) - measurably, they already treat the world that way as infants.

The article mentions writers like Steven Pinker (a Cognitive Scientist/Linguist) and Richard Dawkins (a Zoologist, inventor of the term "meme"), both of whom I have already read books by. They are both very worth reading and are not too hard to understand.

This topic is actually one where some disciplines of Cognitive Science meet. In this case, there are Philosophy (of the mind) and Psychology.

I once had the idea for an interesting experimental setup: I would compare the brain activity of people while they pray with that of children talking to their imaginary friends. I think both of these behaviors evolved to deal with the ongoing dialog in your own head - so you have someone "to think to". I would expect these activities to share location and patterns, of course. Nobel Prize guaranteed.
Or what do you think?
# lastedited 19 Sep 2006
16 Aug 2006
For her last birthday, I bought an old G3 iMac for my girlfriend. I planned to upgrade it with RAM and a bigger hard drive.

Long story short, I did it, but it was a hell of a lot of work and unexpected problems. And maybe I would even have messed something up badly if I hadn't found this excellent website. Some guy explains every little step of upgrading an old iMac in detail. And just for the model I bought on eBay. There are a lot of models out there, but his page deals with exactly the one I bought! With pictures of each step, even!

How crazy is that guy?

Well, it seems that a lot of people are that crazy. Nobody predicted the bottom-up content revolution of the internet before it was born. There seems to be altruism everywhere and all it took to unleash it was to give people tools to do it: HTML and a network.

But there is another interesting, related phenomenon that few people these days, at least few non-technical people, know about: people also build software and share it for free. For nothing. Nada.
It's called Open Source. For instance, these pages are hosted on a server that runs on Apache, which is a great, complex piece of software, and totally free. It generates income for my web host, so there are a lot of interesting economic implications there, which I haven't really found the time to think over.
My point is, there has been more and more free software made available in the last few years. Heck, even this PolyPager tool I wrote to make my webpages is free for everybody, and it uses free tools itself.
I realized this once more in the project I am currently involved in at the University of Osnabrück. We're a small team of students with no budget creating quite powerful software for scientific experiments. We're using so many great software tools that I can't even list all the great stuff these tools are doing for us.
Here is a short list of the tools themselves (in no specific order - almost all of those have been developed in the spare time of thousands of smart people, the others are developed by companies for public use):
I think that most people haven't understood how great this is: these tools are really powerful. You can start a whole company with tools that cost you nothing. All you need is your head (and a computer, and internet and so on, admittedly, but you get the point). This was not true 15 years ago. It was true 10 years ago, but almost exclusively for web applications. Now there is free software for almost anything. For a quickly googled example: I am no expert in Computer-Aided Design (CAD), but I know it's an engineering method used in industrial development and it requires complex and expensive software. Well, here is software that does it for free (notice the pictures that show what you might design with that software).

Now, you might argue over the motives that lead to so much sharing of valuable work. You might say that much of it isn't altruistic at all, but showing off skills or trying to lead the pack by defining the standard software. Agreed. There are many motives.
But I think there are a lot of implications that we're not thinking about enough. What does this mean for progress? For economic growth? For globalization? (For instance: do we share too much knowledge with, for example, China?) For culture? (What does a copyright imply? What licence models should we have?) For patents?
Etcetera.

There are some people debating all this, but considering the implications Open Source has for the economy (already and in 10 years), there is far too little debate.
# lastedited 01 Sep 2006
12 Jul 2006
...you might end up like this (video)
#
10 Jul 2006

At least, Jaron Lanier calls it "art". Cephalopods are a class of mollusks containing octopus, squid, cuttlefish and nautilus. Their means of communication is so different from ours that Lanier used it as an example when he was interviewed about what the future could bring for mankind. Maybe communication will be enhanced, he thinks, pointing to cephalopods.

Take a look at this page and see them in action (there are two movies on it, refresh the page to see them again). You'll see that cephalopods can show amazing stuff on their skin: moving patterns or, chameleon-like, the colours of the surroundings. Researchers know that the patterns are used for communicating with mates or prey. It's an interesting way to show what you're up to and the researchers even like to compare it to humans:

"Although there is no evidence that cephalopods communicate symbolically like we do, there are other forms of communication that we have in common. For example, cuttlefishes gesture with their bodies by striking specific postures that signal their intentions. Humans also gesture and assume postures both to signal intentions and to reinforce what they are saying. Cephalopods use gestures or postures to reinforce body patterns in a display. Humans blush when they are embarrassed, ashamed, or sometimes just anxious. The mechanism of this response is different, physiologically speaking, than the body color changes of a cephalopod, but they still communicate an internal state of being."

Cephalopods seem to us like weirdos from another planet. But still, they live on the same planet as we do. While we're being naughty in envisioning the future, why not imagine us using displays to communicate feelings? Blushing is so old-fashioned... :-)


# lastedited 10 Jul 2006
29 Jun 2006
Within the last year, I've become quite familiar with Linux (Ubuntu, Suse) and Mac OSX. That means: I already knew Windows, of course, and "familiar" means I can set them up for myself and configure everything just like I want.
Because of this administrator knowledge, I also had to set up some PCs for friends and family members during the last few years ("had to" means: no one else could be found who could help - other geeks will know that dilemma). Of course those machines all run Windows XP, because that's standard for normal PC users.
As I gained experience with all those operating systems and observed those Windows users who basically rely on me to keep their machines running as expected, I noticed that there may indeed be a turning point in sight: I am considering not giving those people Windows next time! Why?
  1. The operating systems get more and more comfortable. Mac OS X is really nice and top-rated by most of its users. Linux distributions like Suse and Ubuntu are really close to being usable by a broad user base (in fact, I came to Linux last year, and getting it installed and configured is no longer the badge of geek honour it was 5 years ago).
  2. More applications are available on all platforms. There are projects like Firefox, Thunderbird or OpenOffice - which is really usable for people with normal demands. And there are more and more applications that are available online.
  3. 1) and 2) are the product of the hard work of many people over the last years. Those things were described years ago, but I feel that they are only now really close to reality. Also, 1) and 2) matter more to the administrators, people like me who set up systems for friends and family. The user-side argument is: there are a lot of users who basically do not install anything special. They just surf the internet and write emails and documents and that's it. Those users, mostly people over 40, are a lot of people; a huge part of "my" machines are operated by them. They expect that "the machine just works". If something is broken or not working as expected, they'll be calling me anyway. Then why the hell shouldn't I install on their machine what I know how to handle best? That would be reasonable. All they need to know is where to find which button. Also, with Linux, I could avoid the legal/illegal Windows copy question (which, it seems, is getting more important in a few months).
If even I, not too professionally experienced in administering operating systems, am thinking about this, then that turning point seems to be in sight.
# lastedited 30 Jun 2006
28 Jun 2006
... or rather: the use of fMRI in experiments might lead to overrating of its findings.
I know it's not the best blogging style to just relink pages that appeared somewhere else. But this is just great: I wrote another neurobiology exam last Monday, including questions on brain scanning methods (PET, fMRI, EEG, MEG) and what they can be used for. And now I read this:
It's good to know the limitations of all those methods, but maybe that is not enough, as an experiment mentioned in this article suggests: even experts could be persuaded to give a better rating to bad explanations of experimental setups when some nonsensical neurological fMRI buzzwords were thrown in. "For both the novices and the experts (cognitive neuroscientists in the Yale psychology department), the presence of a bit of apparently-hard science turned bad explanations into satisfactory ones."
So it's really worthwhile to learn this lesson by heart. Why is fMRI so seductive?

The authors say that we're natural dualists, still surprised that thinking can actually be watched. Still, there's not much more to it than just watching. Understanding is something else. fMRI is a functional method, giving you information on where increased activity is happening. That's how the neurological buzzwords were used: "These [buzzwords] were entirely irrelevant, basically stating that the phenomenon occurred in a certain part of the brain." The time resolution of fMRI is actually really bad, so there is a restricted range of experiments that can really use fMRI methods to give surprising new insights.

"We know far more about the mind from the study of, say, reaction times than we do from fMRI studies."  This is a good example because reaction times allow you to say how much computation action A needs in contrast to action B. There have been many interesting findings based on that simple experiment setup, but what do you have to illustrate that when you publish them? A graph. Bohooooring. FMRI gives neat colored pictures. That's one of the reasons of its popularity.
# lastedited 29 Jun 2006
19 Jun 2006
I am currently working as a tutor in the course "Introduction to Artificial Intelligence". That sounds great, but half of the work is really only correcting Prolog homework and helping to supervise the exams.
The other half, however, is holding a tutorial once a week, showing slides and all that. This is done so that everyone who didn't understand the professor or doesn't know how to start with the homework can get advice from a student that roughly knows why understanding this might be hard.
Besides teaching me to speak English in front of people, this helps me learn to create content for people. Over the semester, I learned that I get more attention if I look carefully at what those students need right now. That's actually a good lesson nowadays, because on the web attention is measured by traffic and/or clicks, and that measurability makes it easy to translate attention into success.

Ok, now how do I measure that attention?
I picked a bad date for my tutorials: Friday afternoon. Of course most of the students will be home or already into their weekend, so there are just a handful of people showing up each Friday. But thanks to our university's online information system, tutors (as well as professors, of course) can upload their presentation slides, and students can look at them whenever and wherever they want. The system counts downloads, too. As this screenshot of the download area (click it) shows, this audience is much more attentive than the physical audience on Friday afternoon:



I can also see where I tried to be clever and give the students too much of my precious skills. By the third session, I was down to a little more than 30 downloads. After that, I talked about topics that were less advanced and more homework-centered. You don't see that so much in the titles, but in the content (some of them are available here in the takeaway section).
# lastedited 26 Sep 2006
08 Jun 2006
Well, I know how to see the error page. Search for osx "operating system". I tried it three times and always get this error page:


Other terms with the same structure (for example, bla "bla bla") work. Nice bug, indeed...

Update that same afternoon: the error doesn't appear anymore. Would be interesting to know what happened there... At least I now know that they use Perl. Oh god, I'm such a geek!
# lastedited 08 Jun 2006
06 Jun 2006
Stranger, if you have scoured this dry internet for help with the Saxon XSL processor that ruins every umlaut no matter what encoding you use, just like I once did, your winding googling road has come to an end:
Use Xalan.
#
29 May 2006
Since this morning I have been convinced that my bookmarks are gone. I will have to use a backup from January 2005. This is very sad. Everybody who collects these little treasures from journeys through the web will understand...
How could that happen? I reconstructed the whole sad story like this: Normally, my Linux and Windows installations share the bookmark file for Firefox. As I recently got my G4 iBook, I tried out the BookmarkSynchronizer extension for Firefox to store my bookmarks on a server, so that all my Firefoxes just get their bookmark file from there. I made Firefox upload and download that file automatically on shutdown and startup, respectively. I hadn't done that on my iBook yet, I had no time. Sadly, as it turns out! But read on...
The next thing that led to the loss of an important part of my external memory is this: I meddled around with my /etc/fstab file. That file holds the configuration for partitions on the hard drive under Unix. In fact, it specifies how they are mounted into the file system. While I rescued some data from my girlfriend's old hard drive, I also tried out something with the partition that holds all the data that my Linux and Windows installations share.
Long story short, I made a mistake. That partition was not mounted anymore (but I didn't notice). The next time Firefox shut down, it uploaded my bookmark file automatically to the server. Since it didn't find one, it uploaded an empty file. And then... well, it must have happened like this: then I noticed I had messed up my /etc/fstab and corrected it. The next time I started Firefox, it automatically downloaded that empty file and replaced the local one with it. From that point on, all my bookmarks were gone. It's a cruel world.

Update 28.06.2006: oh my god, I found a copy of the bookmark file I had uploaded on my desktop. It's called "xbel.xml". I must have downloaded it to see what that XML looked like. Thank heaven I was so curious. Now my bookmarks are back :-)
# lastedited 28 Jun 2006
29 May 2006
aharef.info has an applet to make website structure viewable. The pictures resemble organic structures in a beautiful way, and good, modern structural design becomes clear at a glance. Here is my page:

# lastedited 08 Jun 2006
22 Apr 2006
For my new Prolog tutorial job, I need a laptop to show slides on the projector.
Until I have my own, I borrowed my father's, which has no LAN connection...
Because Windows ME was causing trouble, of course, I decided to just install the latest Ubuntu distro and see how that works.
Of course, I wouldn't be able to upgrade anything via apt-get, because I wouldn't be able to connect to the internet. But I thought that Ubuntu comes well-packed, with all the basic stuff you need.
Well, when I wanted to install Prolog on it (last-minute style in the cafeteria, of course), it told me that it lacked any C compiler. What the...? A Linux without a C compiler? It's true: http://distrowatch.com/weekly.php?issue=20051219#2
Yes, the world hates you when you don't have internet...
#
13 Apr 2006
Heard about CAPTCHAs lately? They are a kind of Turing Test (the name is short for Completely Automated Public Turing test to tell Computers and Humans Apart).
On many pages on the web that offer free accounts, spammers try to spread their "message". So the pages try to tell the humans apart from the spambots (of course that is not the whole Turing Test, which is based on a real conversation - this difference reflects nicely what has changed since Good Old-Fashioned AI).
Most of the actual CAPTCHAs are pictures, so you are asked to type in the letters that are shown. Many of these CAPTCHAs are not safe anymore, as the arms race proceeds. Computers can read images, too.
On the official CAPTCHA site (by Carnegie Mellon) you will find more examples: audio tests, pictures with object recognition and simple logical tasks.
I didn't like all of them. For example, I failed here. But the real problem CAPTCHAs have is not badly designed tasks. It's the arms race. It is ruled by this one simple principle: if you let computers generate the task, a computer might be able to solve it, and eventually some hacker will come up with a program that actually does. And some go even further: they delegate the task to humans (and pay them with access to porn, so the deal is: give one access to get another access - seems fair :-)).
I've been thinking about this lately and came up with some things that might be needed for successful (that means: not breakable) CAPTCHAs:
  1. You need to choose a domain where computers are not even close to reaching human performance. Object recognition is worth a try, but it is already a domain where computers are starting to have their first successes. Why not try reading people's intentions? Present situations with several participants/objects involved and let the user guess what person A is about to do. This task - well designed - can be solved by first graders, but not within 10 years by a computer. Reading intentions is out of reach.
  2. The fact that you can theoretically decode computationally everything that was encoded computationally needs to be taken care of. The ingredients of the task need to be combined at runtime. That is a combinatorial issue, and cryptography has thought this through: a 128-bit key is theoretically computable, but in practice this takes a long time. A situation like the one I described in point 1 is a good model for that (a toy sketch of such runtime combination follows right after this list)...
  3. There remains the problem that spammers might use humans to solve those tasks: they delegate the actual CAPTCHA task to some human who gets paid a little money (I don't know if that is actually cost-effective for them) or who incidentally wants to see some delicate content that the spammer provides at that moment. What happens here is that "our" machine asks "their" machine to do the task, and "their" machine delegates the task to some human, just like a proxy server. So we need to make that delegation difficult. Maybe that would need some anti-Semantic-Web approach: data that is well-formed, but definitely not machine-readable or delegatable. That is: the bot might transport a bunch of data to the human, but not the sense of the task.
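Here is a toy sketch of what I mean by combining ingredients at runtime (the ingredients are made up, and a real version would of course need far richer situations):

import random

# Made-up ingredients; the concrete question only comes into existence when it is asked.
ACTORS = ["Anna", "Bob", "Carla"]
PROPS = {
    "an umbrella": "stay dry",
    "a ladder": "reach something high up",
    "a watering can": "water the plants",
}

def make_task():
    actor = random.choice(ACTORS)
    prop, intention = random.choice(list(PROPS.items()))
    question = "%s grabs %s and walks outside. What is %s probably about to do?" % (actor, prop, actor)
    return question, intention

question, expected_answer = make_task()
print(question)
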
After all, I think this is a field where we can try out what modern AI is about and do something practical at the same time. It's interesting, though, that we are not the ones working on the machine that wants to pass the Turing Test - the spammers are. We are just talking about defining the game (I still prefer this side).

A last point: It should not be forgotten that there are a lot of people out there that cannot judge pictures because they cannot see them (they are blind). Some people raised this issue, especially the W3C, but I haven't heard a convincing way out of this dilemma.

[Update 17.07.2006] This is also a nice idea: select the three "hot" people out of a bunch of pics from the dating service "Hot or Not" to prove you're human...
# lastedited 17 Jul 2006
26 Mar 2006
This entry could consist of just the title: I recommend XAMPP.
I cite the authors: XAMPP is an easy to install Apache distribution containing MySQL, PHP and Perl. XAMPP is really very easy to install and to use - just download, extract and start.
I used it on Windows and under Linux - and XAMPP seems to be one of those few software downloads that work just as easily on both platforms. Hurray!
It was even simpler on Linux - on Windows I had to convince Skype to use another port because the Apache server was using the same one... but that was still easy. Thanks to Simon for the hint, and - doh! - shame on me for not trying this out way earlier...
# lastedited 26 Mar 2006
29 Jan 2006
This is a nice thought. Will we one day be telling our kids that we, back in the old days, could just put away our mobiles and walk around undetectable - invisible?
# lastedited 26 Mar 2006
22 Jan 2006
I just got introduced to (yet another) scientific buzzword: Sozionik. That's a blend of Soziologie (German for sociology) and Informatik (German for computer science).
The two speakers were Prof. Dr. Uwe Schimank and Dr. Thomas Kron from the Fernuniversität Hagen (the whole series of talks is organized by our local system scientists). The title of their talk was "Soziologische Akteurmodelle und Agentenmodellierung" (sociological actor models and agent modeling).
Their claim is that to achieve rich multi-agent models, you need to consider models of the human mind that have been discussed over the last decades by the social sciences. Here are the four base models they used:
  • homo sociologicus (acts to fulfill the norms)
  • homo oeconomicus (acts to maximize his profits - whatever they may be. this definition of what we can call a profit is a theoretical problem - some people want to explain suicide with this model)
  • emotional man (whatever emotions rule you, you will follow them)
  • identity insisters (you have a strong picture of what you are? -Then you will try to act according to this view of yourself)
These models were then used for a whole population of agents. They built a situation to examine the "bystander dilemma" (several people observe a crime but few or none help). The interesting cases were those where they used a mixed population (like 30% emotional man, 70% homo sociologicus).
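Out of curiosity, here is a toy sketch of that mixed-population idea (my own crude simplification, nothing like the speakers' actual models): each agent type decides differently whether to help in a bystander situation.

import random

def helps(agent_type, bystanders):
    if agent_type == "homo sociologicus":   # the norm says: help people in need
        return True
    if agent_type == "homo oeconomicus":    # helping pays off less the more others could do it
        return random.random() < 1.0 / bystanders
    if agent_type == "emotional man":       # depends on the emotion of the moment
        return random.random() < 0.5
    return random.random() < 0.3            # identity insister: a made-up placeholder probability

population = ["emotional man"] * 30 + ["homo sociologicus"] * 70
helpers = sum(helps(agent, len(population)) for agent in population)
print("%d of %d agents helped" % (helpers, len(population)))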

I imagine that social scientists and psychologists could soon be fighting for dominance in modeling such multi-agent systems. Since you have constraints on your complexity, psychologists will have to simplify, which is not their strength.

After all, this approach is a step towards multi-agent systems that rest on better theoretical grounds than layman approaches. And it is a step up on the scale of the complexity of single agents. There are two dials on which you can crank up complexity: the sheer number of agents and the complexity of single agents. Those are two distinct approaches, and the latter has not been explored much (as far as I know)...
# lastedited 22 Jan 2006
19 Jan 2006
I once tried to summarize Jamais Cascio's excellent future vision of the Participatory Panopticon.
Today I listened to the talk he gave at Accelerating Change 2005. The theory is almost the same, with some new technical examples (for example, taking pictures of what you eat with your mobile phone and letting a Japanese company compute your diet on the fly). New is this one idea:
He brought up the question of whether these memory aids will be built into our bodies or not. And his answer is "no", illustrated with this simple point:
Brain surgery is something you don't want to have every few years, definitely not. But, for sure, the technical gadgets of 2015 will supersede those of 2010. And in 2020 those will be old hat again. No one would want to undergo brain surgery every few years. Sounds interesting.
Maybe there are non-invasive techniques on the way, but those really will need some years, say 20, to kick in.
# lastedited 19 Jan 2006
27 Dec 2005
As I am still wondering what I could do as a bachelor thesis, I'll go ahead and try to formalize any idea coming to my head.
A friend of mine is a writer. He is working creatively with ideas. He collects those ideas in a little notebook. Now, he asked me once if I could think of a good and easy way to
  1. collect all those ideas and
  2. make it possible to assert connections between them in an easy way
Point 1 is no problem: make up a webpage or some MS Access application or whatever. Point 2 is where it gets interesting: He would like to go over his ideas and link two of them together, say some interesting character he once thought of, like a street musician with an ape, and maybe a scene he thought of on another day, like a car chase in Moscow (no wait, James Bond already did that with tanks...). Anyway, maybe he comes up with a reason why that character would be really cool together with that scene. He wants to link the two quickly and maybe even write a sentence about why he made the connection.

That's an interesting problem that not only creative people have. Think of police officers going over a lot of details of a case. Links between details are essential. The problem today is not to see that, but to come up with a human-friendly implementation that comes close to how humans like to associate. If they get interrupted in their associating process because the system is sluggish or flawed in some other way, they might lose that precious train of thought.
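Just to make the storage side of this concrete, here is a minimal PHP sketch with SQLite; all table and column names are invented for illustration, and the real trick would of course be the user interface on top:

    <?php
    // Two tables: one for the ideas themselves, one for the links (with a note on why).
    $db = new PDO('sqlite:ideas.db');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $db->exec('CREATE TABLE IF NOT EXISTS idea (
                  id   INTEGER PRIMARY KEY AUTOINCREMENT,
                  text TEXT NOT NULL)');
    $db->exec('CREATE TABLE IF NOT EXISTS link (
                  from_id INTEGER NOT NULL REFERENCES idea(id),
                  to_id   INTEGER NOT NULL REFERENCES idea(id),
                  note    TEXT)');

    // Collect two ideas ...
    $insert = $db->prepare('INSERT INTO idea (text) VALUES (?)');
    $insert->execute(['street musician with an ape']);
    $musician = $db->lastInsertId();
    $insert->execute(['car chase in Moscow']);
    $chase = $db->lastInsertId();

    // ... and link them, with a sentence explaining why.
    $db->prepare('INSERT INTO link (from_id, to_id, note) VALUES (?, ?, ?)')
       ->execute([$musician, $chase, 'the ape could steal the getaway car']);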

Now, my current idea would be this: Make a Firefox extension out of it. You get for free all the browsing infrastructure that drove the information growth of the last years: loads of content, tons of users, platform independence and so on... A lot of the information is already structured, too - more and more people are going for XHTML now (and whoever wants to use such an associating tool might be convincible, if they are not already doing XHTML).
Let's just end up with my short train of thoughts:
If I take into account that a lot of interesting things have been implemented already, there might not be too much work left. I would use Firefox extensions like:
  • Content Holder - display content in another pane so you can compare two content areas (I also tried out Split Screen, but it doesn't seem to work on FF 1.5 yet)
  • Mouse Gestures - draw lines on the screen with the mouse. That'd be used to connect chunks of content to each other.
With a lot of technical things already there and loved by thousands of users, we'd be left with some work, but with more time for interesting questions like
  • What are the semantic structures the user would need? And how would the XHTML chunks be addressed?
  • How can the user view and reuse his associations?
  • How should we model the processes that a human makes when associating concepts?
# lastedited 27 Dec 2005
24 Dec 2005
# lastedited 29 Mar 2006
21 Nov 2005

I just installed the iBrowser plugin for TinyMCE to make image upload possible (it is not included in TinyMCE by default). This is a pretty good description.

Besides chmoding the ibrowser directories to 777 (not 775) I had to make two extra adjustments:

  • I renamed editor_plugin.js to editor_plugin_src.js because otherwise no iBrowser button would show up in TinyMCE
  • I still couldn't upload pictures because the "Browse..." button that lets you select a file on your computer was hiding the button that actually starts the upload. I changed the size of the input field in ibrowser.php, making it 10 characters shorter. Maybe that's a general iBrowser problem with Firefox on Unix (Opera worked fine).
# lastedited 18 Dec 2005
04 Nov 2005

For the last two weeks I kept having problems installing software on my beloved Ubuntu Linux.
There was always some 404 Not Found error from some server, even with "apt-get update".
Honestly, I didn't have the time to go after that, because the semester is really rolling these days. And I'm not a Linux genius, you know. I had no idea where to start looking.

Today, after googling a little, I remembered editing the file /etc/apt/sources.list right after installing Ubuntu. I had inserted a backports line then, as it is (still!) proposed on the Ubuntu Forum. You specify "http://ubuntu-backports.mirrormax.net/" as the backports server; in detail, you are told to write this:
"deb http://ubuntu-backports.mirrormax.net/ hoary-backports main universe multiverse restricted"

However, on the Ubuntu backport project page it says:

"NOTE: This hoary-backports line is obsolete. Please don't use it:
deb http://ubuntu-backports.mirrormax.net/ ...
"
Now I thought I had found it. I just edited /etc/apt/sources.list, replacing the URL with one of the other mirrors proposed there.
They all failed, too.

I love remembering those seconds when you are so close to giving up. I browsed to one of those mirror servers to see if there was anything there at all. I had already noticed that "http://ubuntu-backports.mirrormax.net/" contains no files for the distribution "hoary", which happens to be mine.
But "http://acm.cs.umn.edu/ubp/" has everything there. It's just in a directory with a slightly different label. Instead of "dists/hoary-backports/" it is "dists/hoary-backports-staging/". Seeing that, I tried using that in /etc/apt/sources.list:

"deb http://acm.cs.umn.edu/ubp/ hoary-backports-staging main universe multiverse restricted"

and voila - it worked. "apt-get update" finally yielded no errors and Synaptic now installed what I was looking for.

With so much documentation that Ubuntu has built up, it is no big surprise that it is hard to keep everything up to date everywhere (though I think this backports server list is a little flawed - you need to know the directory structure on the server...).

# lastedited 16 Jan 2006
29 Oct 2005

I'm working on some Cognitive Psychology papers on text comprehension these days.

In the experiments those papers describe, they always use their undergraduate students as participants. I have to participate in those kinds of experiments myself to get the credits.

Now, I am not a Psychology student. But I know that to be admitted to study Psychology, you have to be really smart. At least that is the case here in Germany.

When all those experiments are conducted with some kind of elite brainiacs, can the results really be transferred to all kinds of people? I'm not questioning their relevance, but is the data really giving you a profile of the average brain?

For example, when they say: "Calvo and Castillo (1996, 1998, 2001) and Calvo et al. (1999) observed facilitation in naming the inferential target word under 1,250- and 1,500-ms SOA conditions, but not when 500- or 1,000-ms SOAs were used."

This means that the participants' brains did react when the stimulus and the measurement were 1,250 to 1,500 milliseconds apart. They did not react significantly when they were only 500 or 1,000 milliseconds apart. Now we know how fast smart young people react when they are asked to name a target word. But:

Is this still true when you stop testing only the nerds? When you test people who are not very bright, you might get results that have nothing in common with the results above.

Maybe there is a study on that question, but how would the question be phrased? "Are smart people representative subjects for psychological studies?" No one can define what "smart" means. We all know where the problem lies, but science needs more specific grounds.


I would suggest differentiating people by their memory ability, because that is what came to my mind first when I thought about Psychology students - they know how to learn :-)

# lastedited 03 Jan 2006
20 Oct 2005

I just noticed that the JavaScript code I showed in this 2cents was not very readable. I had just colored it green. Everybody could see it was source code, but nobody would have fun reading it.

I use jEdit as my favorite code editor, and I remembered it has this cool plugin called code2html for turning code into nicely highlighted HTML. Unfortunately, the plugin manager returned some awkward FTP error again (is jEdit not maintained anymore? They even have a MySQL access error on their index page!).

So I googled "code2html" and found this online CGI script. It works really well and the output looks nice, but sadly, its output is not XHTML. The font element is used with its "color" attribute: <font color="4444FF">

Shortly before giving up I thought: "Why not copy and paste it all into my TinyMCE Editor I have for my Webpages? Let's see what it does with it."

And voila: <font style="color: rgb(68, 68, 255)">. Here the styling is done with CSS (of course, the "font" tag has to be configured as a valid_element in TinyMCE).

A nice example of how good a piece of software TinyMCE seems to be.

# lastedited 20 Apr 2008
18 Oct 2005

Today I wanted to make some progress on my private Homepage Framework.

Sadly, I spent most of the afternoon surfing. You know how that works... I finally tried out TadaLists after reading about it for the tenth time, and then also Backpack. I read blog entry after blog entry about it for no reason other than curiosity... I think that besides the points they make about simplicity and the smile on your face when all the JavaScript stuff works so elegantly, some of the features are really cool (like emailing a list entry or a note or pictures).

But not all is heaven. I tried out the SMS reminder service (they even have O2 Germany, my provider, listed there). It didn't work. I can program an alarm on my phone with the same effort anyway... And why all the hype about this easy text format for their wiki? It's just another markup language you will have to learn. Why not use a cool WYSIWYG editor?

 

So then I finally read about Ruby on Rails. I even watched the presentation movie. Maybe I avoided it for so long because I feared it might be so good I'd stop working on my own project. And it is pretty cool. Ruby on Rails makes maintaining all that boring DB access code (be it editing or retrieving) a snap. Superior to my stuff (at least in today's version) is that you can arrange it all just the way you want. You can take the gifts, and then change the gifted code wherever you want it to look your way. It seems that if you don't change anything too critical, just the presentational pages (*.rhtml), the gifted code will still scale smoothly with the database. No mapping! You change the database, and your webpage changes. I can understand why David, the main author, is angry about J2EE guys claiming the whole ground of good frameworks.

So that got me interested; I even found a nice, cheap web hoster that offers Ruby. But I also asked myself, somewhat desperately, whether I should really go on coding or just switch to Ruby on Rails - for time is money :-). I decided that, yeah, I should go on:

  • after all, I'll know what it means to build a web framework on your own. Somebody who just knows Ruby on Rails knows shit. Okay, (s)he might be faster and better than me, but (s)he doesn't know what (s)he's doing there. Whoever cannot value the gift is gifted, but poor. And then there is the law of leaky abstractions. Every abstraction leaks, somewhere, inevitably. You cannot know everything down to the bytes, but I would rather hire someone who knows what concepts are mapped beneath their level of abstraction than someone who thinks (s)he is walking on concrete while really walking on nothing but sunshine (sounds like a reversed LSD trip...).
  • I would still need to learn Ruby, and then Ruby on Rails. Honestly, I'm building this project to get some requested webpages up quickly (this one included). And I want to know what I'm doing, roughly. With short-term thinking, which is not aaalways avoidable, I cannot ride a totally new horse right now.
  • My framework gives you some more gifts than you get just using Ruby on Rails: menus, grouping, RSS, some JavaScript things; I include and configure a WYSIWYG editor as well as an image gallery, image rotation and so on. Now that is because I am building something that is not nearly as flexible as Ruby on Rails, but rather a cocktail of cool things for web applications. Of course, I could rewrite my PolyPager so that it uses Ruby on Rails. Presumably I could save half of my code and get live DB updating for free. I do hope I someday find the time for that, but a good deal of the time I already put into this went into decisions about the things I mentioned above.
I added the new book about Ruby On Rails to my wishlist anyway :-)
# lastedited 19 Jan 2006
30 Sep 2005
I had another thought about testing: What about recording use cases of an application via a little program that watches your steps?
What does it do?
I could imagine such a program for web applications. It could very well be a Firefox extension. It has two modes: Do Nothing and Listen. The first would be the default, but when you are testing your web application, or someone else is, you turn the mode to "Listen".
Now the program records what you do: type some text into form field X, click button Y, or follow link Z. Just as when recording macros in word-processing programs, you could easily say where your use case starts and where it stops.
This way you get a sequence of defined actions with no effort. Such an action sequence represents a reproducible use case - just what you are looking for when you are testing heavily.
How easy could it get?
In fact, a way to save even more effort would be to have the program always listen, and when you think "Hey, I just had a case worth recording", you hit a button and the program makes a use case out of your last actions. This would be great for testers who test a zillion things; when something goes wrong, they could produce a reproducible use case with one mouse click.
How would it work?
First the easy things: identifying the active elements of an application via mouse events is no big deal. Also not too hard would be building a user interface where you can adjust the use case without much trouble.
Now the difficult part: for a use case to be reproduced, you would also need to know things about the environment of the computer it was recorded on. There may be a lot of work in defining what goes in there and what does not.
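To make the "sequence of defined actions" concrete, here is a rough sketch of what a recorded use case might look like once exported, and how it could be replayed against a web application. Everything here (URLs, field names, the format itself) is made up for illustration; the recording would of course happen in the browser extension:

    <?php
    // A recorded use case: an ordered list of actions the tester performed.
    $useCase = [
        ['action' => 'fill',   'field' => 'username', 'value' => 'tester'],
        ['action' => 'fill',   'field' => 'password', 'value' => 'secret'],
        ['action' => 'submit', 'url'   => 'http://example.org/login'],
    ];

    // Replaying it: collect the form input, then post it, using PHP's cURL functions.
    $form = [];
    foreach ($useCase as $step) {
        if ($step['action'] === 'fill') {
            $form[$step['field']] = $step['value'];
        } elseif ($step['action'] === 'submit') {
            $ch = curl_init($step['url']);
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, $form);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            $response = curl_exec($ch);
            curl_close($ch);
            echo "Replayed submit to {$step['url']}, got " . strlen((string) $response) . " bytes\n";
            $form = [];
        }
    }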
# lastedited 10 Oct 2005
28 Sep 2005
I listened to Kim Polese's talk at OSCON 2005.
Speaking for SpikeSource, her company, she made the point that in the future, keeping software systems healthy will not just be a big issue, but a strategic one.
Using open source software has become a critical strategy for companies to achieve lower IT costs. But all the different modules have to work together. Plus, they upgrade their versions in very different cycles. Administration and, even more importantly, testing of these Babel-like software heaps will be a big task, and if it doesn't become manageable, progress in this area will come to a halt.
So much for Kim Polese's points. From my Cognitive Science standpoint, I immediately thought: "This testing - that's a task for... Agents."
Instead of writing code with direct calls to the software, like JUnit does, we should abstract these calls away behind a layer that just describes use cases. Each use case becomes an agent that will act just like the human who described it (users, tech support, ...).
And each night, hundreds or thousands of little agents test the software for the company.
There is an AI side to it (the agents: their interfaces, their messages and so on), but also a plain IT side (the abstraction of calls to the software Babel). And the latter might be the most work. I get the feeling that a lot of today's problems are structured like that.
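A rough sketch of the abstraction layer I have in mind (all names invented, PHP just for illustration): the use case is described against an interface instead of calling the software directly, and the agent is the thing that walks through it.

    <?php
    // The software Babel hides behind this interface; each module gets its own adapter.
    interface SystemUnderTest {
        public function call(string $operation, array $arguments): mixed;
    }

    // One recorded use case, turned into an agent that can run every night.
    class UseCaseAgent {
        /** @param array<int, array{op: string, args: array}> $steps */
        public function __construct(private string $name, private array $steps) {}

        public function run(SystemUnderTest $system): bool {
            foreach ($this->steps as $step) {
                try {
                    $system->call($step['op'], $step['args']);
                } catch (Throwable $e) {
                    echo "{$this->name} failed at {$step['op']}: {$e->getMessage()}\n";
                    return false;
                }
            }
            return true;   // the agent got through its whole use case
        }
    }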
# lastedited 28 Sep 2005
26 Sep 2005

While searching the web for generators that do what I am trying to do, I found CodeCharge (among others). It promises to be a really mature generator for web applications.
All the more interesting: it works just like mine. The input is an XML file, plus a lot of XSL: "over the years the complexity of our XSL has grown tremendously".
A lot of the way they did it reminds me of my last project at RoeperWeise, where I programmed version 1.0 of a VB6 generator. The tying of developer-defined actions to events, for example. Interesting.
But CodeCharge might not be perfect. Try their URL in Opera: all the main links are gone...
Good to know nobody is perfect. I know how complex this generator business can get when you keep adding features (and they have a lot!).

So get it on!

# lastedited 09 Oct 2005
22 Sep 2005
When I tried out Ubuntu a few weeks ago, I stumbled upon the option to order the original Ubuntu CDs.
I have been happily running Ubuntu ever since. And today, the postman rang. I wasn't thinking about the CDs anymore, so I sure was surprised to find this:
I ordered so many because they said on their website that they don't care how many I'd take. I was near the minimum they had suggested :-) Red is for Intel x86, yellow for the new AMD64/EM64T processors and orange for PowerPC (Mac).
For each, there is an install CD and a live CD. Good work!
# lastedited 23 Sep 2005
07 Sep 2005
Well, when you're generating PHP with no PHP book on the shelf nearby, you have to find out all the oddities yourself, I guess...

For the echo function, there are several differences depending on whether you give it single or double quotation marks. So here is yet another of my new coding conventions:
To be able to debug the HTML output, newlines are needed to structure it. Interestingly, PHP will not interpret "\n" when single quotation marks ('\n') are used, so a literal "\n" will then appear in your browser.
I use single quotation marks because they can wrap around the double quotation marks I need for the HTML output (mostly for attribute values). It won't work the other way round (echo with "", attribute values with '') because within the XSL transformation, single quotation marks become double quotation marks (sigh).
So I use echo('...'); as the standard, and put an echo("\n"); after each line when I want a newline in the output.
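A minimal example of the difference, and of the convention (plain PHP, nothing else assumed):

    <?php
    echo 'one<br/>\n';    // single quotes: \n is NOT interpreted, a literal \n ends up in the output
    echo "two<br/>\n";    // double quotes: \n becomes a real newline in the HTML source

    // The convention: single quotes around the HTML, so its double-quoted
    // attribute values survive untouched, plus a separate echo("\n") per line.
    echo '<div class="entry">Hello</div>';
    echo("\n");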
After a little while I got used to it; it's not that bad. And if it is explained in a comment, it's the best solution I can think of...
# lastedited 26 Sep 2005
04 Sep 2005
My Website-Generator keeps me busy during the free summer time, alright...
The input will be an XML file in which just the data structure and a few other things are specified. The generated output is PHP (which, once on the server, will generate HTML). So we have a nice mix here...

The code mixes snippets of XHTML, PHP and SQL, all wrapped up in the transforming XSL. So, above all, this is XML, because XSL is really just yet another XML format...
All the XML special characters that are also used in the XHTML, PHP or SQL (<, >, &) are likely to cause parsing problems here. While it is appropriate to use CDATA sections when the code (HTML or PHP) is stored in XML, in XSL I think it is better to use xsl:text and disable output escaping.
For the freaks, here is a bundle of points made by the experts: http://www.dpawson.co.uk/xsl/sect2/N2215.html#d3457e277

I made up two coding conventions that make it possible to keep most of the code as-is. This should help a lot with the readability of the code, as I tried hard not to make "&lt;" out of every "<" in the HTML. After all, syntax highlighting is a genuinely useful thing... So here we go:
  1. use xsl:text whenever a special char is used that is NOT part of XHTML that can be properly parsed (that means: it's valid). So when we have the less-than operator ("<") in PHP or SQL, or the PHP delimiters ("<?php", "?>"), they go into an xsl:text element. Also do this when xsl:value-of elements are coded within an XHTML element to dynamically fill in values; the WHOLE XHTML element then has to be masked.
  2. When you echo out XHTML code that uses quotation marks (e.g. for an attribute value), use single quotation marks for the echo function; for the XHTML string itself, use whatever you like - either way, it will be double quotation marks in the output! Some portions of the code will be a little less readable, but only a few. Another consequence is that PHP variables must be output in their own call to echo (see the sketch below).
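Here is a small sketch of convention 2, with made-up variable names. This is the PHP as it should come out of the generator, not the XSL that produces it:

    <?php
    $rowClass = 'highlight';
    $title    = 'My first entry';

    // Single quotes around the echoed XHTML, so the double quotes of the
    // attribute value can stay as they are; variables get their own echo call.
    echo '<tr class="';
    echo $rowClass;
    echo '"><td>';
    echo $title;
    echo '</td></tr>';
    echo("\n");            // newline for readable, debuggable HTML output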
# lastedited 28 Sep 2005
19 Aug 2005
Plans for the website are entering the interesting phase. I am thinking about a cool domain I could use. I found that all those iSomething terms have really been used to death (just try to google "iMind", "iChange" and the like). Maybe I should register my name, as well as something cool like www.mag.net (which also seems to be taken without being used)...

I am also trying to have some kind of topic under which I write. Although a blog can contain just about anything, essays should not, and it's a good thing to have a mission - which just means that you have "something to say". Do I have "something to say"? I am not too sure, so I'll try to bootstrap myself out of that problem: by writing I'll see. But a topic doesn't come that way. I should define what I'll try to say in an essay.

And the technical stuff: I think I'll do a lot of things dynamically, with PHP and a database, so I can write from anywhere. Since my provider only offers PHP I am stuck with that, although I really would like to try Python... I'll generate all the boring PHP that just shows data or gives me the maintenance stuff. That way I can refresh all my XML/XSL skills.
#
16 Aug 2005
I just listened to Jamais Cascio on IT Conversations. His future scenario of Little Brothers everywhere is intriguing, and one I did not find in "Radical Evolution" (although memory enhancement was certainly a topic there - memory enhancement just means that you carry some kind of recording device with you. One day they'll be so light and small you won't notice them anymore. And they'll never need to stop recording, because memory is so cheap).
As everybody enhances his or her memory, lying gets more and more difficult. That is not only due to the sheer mass of data that might be recorded of every event. It's also about availability.
Cascio thinks about the social networking applications that humans like to use to exchange all this. Humans just love to gossip: talk about one another, talk about what they like, talk even more about what they hate.
So you might stand in front of a restaurant you have never seen before, and some little gadget in your pocket helps you find an opinion about it (maybe you'll even get pictures of the pizzas). And that opinion is held by someone you don't even know. You just know someone who knows someone who does. But through a cascade of little conversations between little devices that people have fed with data (consciously or not), you will get your data, Cascio says.
And now think of asking for opinions (or any pictures) about people you just met. "What is your name?" tippeditipp "Ah, here you are as an 11-year-old in your underpants. Haha!" Thanks, mom...

If this scenario comes true (and the reasons Cascio gives are good), there'll be a lot of crying about the misuse of data, and a lot of learning about what not to do and how to trust. Some things will feel different, but this scenario follows from human nature, and that is why it is a good one.
# lastedited 21 Sep 2005