Probably the Best

So it’s been about a month since Obama took office here in the USA, and I have to admit, as skeptical as I was at first, he’s been tackling things in a way that makes sense.

This is a win for greater social change, evidence-based policy making and a return to the values laid down in the Constitution. I can’t imagine a better person who could have won right now, and that’s saying a lot.

Having lived through the 1997 New Labour win, and the bitter taste after ten years of conservatism packaged up in a red rose, I had lost any hope that anyone in government could be remotely honest about their intentions.

Oh, and I also think this administration will be good for FOSS too. It might not be at the top of their list (nor should it be), but with IBM’s old open source guru in there, I’m sure they’ll be able to make effective use of the software and maybe even its production values.

Hidden Developments

Today I decided to post a message to the ubuntu developers mailing list; it seems this new notifications stuff is ruffling a few feathers.

If you go back to my post ‘It’s the Infrastructure Stupid‘, I use the new notifications system as an example of good thinking when it comes to building foundation components for ubuntu.

But here is where my criticism comes in. The notifications system is being developed by the Desktop Experience (Dx) team (not to be confused with the Desktop Team), a new team of developers charged with bringing radically new ideas to the ubuntu desktop.

The problem with this team is that it’s an internal Canonical team that has been hidden away, deliberately cutting ties with the community development process in order to get some OEM jobs done. I’m not sure if this was a management decision or something about the people involved in the team. But I do know it’s a mistake to believe you can develop highly visible components of a radical nature outside of the community and bring them in by force.

It doesn’t matter technically how good a job the new team does. So long as their ideas and development progress are all focused in on themselves, and so long as they are seen to have the utter arrogance to believe that only they can do radical things, the community will put up a fight every time they bring something out.

See, this is just human nature: if you see some alien group forcing changes on your community, you resist.

It doesn’t help that the people involved on both sides are now emotionally driven. The Dx team will be egocentric and heavily invested in the design of their work, and the rest of the community will be horrified that they’ve been left out of the development process. Egos on both sides will be hurt.

This is something Jono Bacon should be worried about: he needs to get Canonical employees to think of themselves as part of a larger community than just the Canonical business. If he can’t, the community will be hurt by Canonical throwing its weight around because it’s too out of touch to know what’s going on.

Each individual must have ties with the community in order for the business as a whole to get along well and position itself in mutually beneficial ways.

How I think Cartoons Work

I’m a bit tired today; I’ve been suffering from a cold these last few days, so apologies in advance for more mistakes than usual.

I was thinking about how cartoons and drawn symbology work. The first thing I thought was ‘Is a cartoon dog just a symbol of a dog?’ Well, no: you can see its tongue sticking out and it’s piddling on George W Bush. There is clearly more to the drawing than just symbology.

So I thought about how the eye works. There are multiple components to the eye, and a hell of a good pre-processing layer of neurons within it. Out of this you get a number of layers: a colour layer, a luminosity layer and, interestingly, an edges layer. The edge detection seems to incorporate stereoscopic vision into a map of where all the various objects you see start and stop. I was looking at some software that mimicked this once and noticed how the outside of a face had huge thick green and red lines between the face and the background, while the inside had smaller lines denoting the distances between each of the objects.

Given that we have a brain that likes to mix signals up, as in synaesthesia, it might be that the black lines used in cartoon images are a replacement for this stereoscopic information, and that is why, to me at least, some cartoon images look more 3D than a photo of a person on a similarly shaded background. Although it’s obvious that the cartoon is not a picture of a real thing and the photo is, my brain can’t shake the idea that the cartoon has spatial information beyond what it expects.

If this is true then I should expect a number of things of cartoon drawings:

  1. Outside lines should be thicker than inside lines and should vary with the depth of space behind them.
  2. Objects that change depth (like noses) should use tapering lines that thin out as they get closer to the object behind them.
  3. Shading/colouring is not required to produce 3D objects in cartoon form.
  4. Lines should get thicker as an object moves closer, compared to similar objects.
  5. Blurring a typical cartoon will not make it look out of focus unless the line widths are changed.

Now I’m going to have to start testing these conjectures, firstly with a cube:
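
Here’s a minimal sketch of how such a test might be drawn programmatically (assuming Python with matplotlib; the geometry and the width formula are my own guesses, not a measured model). It draws a cube in simple perspective with each edge’s line width tied to the depth of its end points, per conjectures 1 and 4:

    import matplotlib.pyplot as plt
    import numpy as np

    # Unit cube corners (x, y, z); z is depth away from the viewer.
    corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
    # Cube edges join corners that differ in exactly one coordinate.
    edges = [(a, b) for a in range(8) for b in range(a + 1, 8)
             if np.sum(corners[a] != corners[b]) == 1]

    def project(p, d=3.0):
        # Simple perspective projection: nearer points spread out more.
        scale = d / (d + p[2])
        return p[0] * scale, p[1] * scale

    fig, ax = plt.subplots()
    for a, b in edges:
        depth = (corners[a][2] + corners[b][2]) / 2.0
        (x1, y1), (x2, y2) = project(corners[a]), project(corners[b])
        # Conjecture 4: nearer edges get thicker lines.
        ax.plot([x1, x2], [y1, y2], color="black", linewidth=4.0 / (1.0 + depth))
    ax.set_aspect("equal")
    ax.axis("off")
    plt.show()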

And these images of a face: the first with all the same line widths, the second with lines drawn as if their thickness were related to depth.

If you can see more depth or if you can’t, post a comment.

Advertising

There is a whole other world (read: community) where ubuntu people and FOSS programmers like to hang out and do their work, quite separate from the one where you buy your groceries and borrow books from the library.

If some smart-arsed advertising executive were to come into our community to give us advice on spreading the good book… er, I mean CD, what kinds of things would he say?

Demographics, that’s the key: you’ve got the techs, the geeks and a good mind share of the nerd market. There are also a lot of managers and top brass in big firms who have some idea that you exist, and possibly even some idea why.

Getting out from these narrow demographics and into people’s lives will require focused effort and a tailoring of the pros and cons that makes sense to each world. I teach a great many underprivileged people (minorities, the poor, etc.) and the whole political underdog message of Free Software works well there, even though the best advice for those who serve their middle-class neighbours is never to mention it, for fear of looking like you’re trying to have a political opinion about software.

Branding is most important. The Linux brand is tarnished and, outside of the existing tech geek demographics, not worthwhile. It’s MUCH easier to get people to try Ubuntu when they don’t know it’s got Linux in it than when they do. Just ask those Microsoft folks; the Vista and Windows brands are tarnished too. They’re going to power on with ‘Windows’ because they can change the whole look and version of the thing and make it look like something shiny and new, and it already has great mind share and an ever-persistent link in the minds of people who think PC means Windows PC.

So how do you make a brand that can cover the great software available in advertisements, the driver availability for hardware OEMs (think Tux on the mouse box) and the one you use on the distro itself? All those Linux adverts currently being created in the Linux Foundation’s competition can’t be that useful, because they’re all going to be trying to push ‘Linux’, when that concept makes no logical sense to outsiders and is difficult enough to explain as it is.

Excitement. It’s true, we can get techies excited about a new release. Even though FOSS moves at a glacial pace, if we can still get people excited then we can still get them to try it. If we’re too slow, they’ll try Debian Woody, figure it’s too hard and never return.

Symmetrical Conceptual

According to some new research, synaesthesia can be induced by hypnosis.

Synaesthesia is an abnormality in some people that causes them to mix up their senses: some people hear colours or taste music. Then it turned out people could also link conceptual ideas to the senses; some see colours when they look at a number. It’s now thought that this kind of rewiring of concepts accounts for savant abilities. One savant who remembered a great many digits of pi later said that he sees numbers like a terrain topology, and flying through that terrain allows him to recall all the digits.

The discovery that you can induce it comes as no great surprise to me. Programmatically, the brain seems to be in a permanent state of rewriting itself (called neural plasticity), and when you turn off certain input validations with hypnosis, you can make the brain do weird things like link ‘2’ to ‘yellow’.

This doesn’t at first seem like such a useful ability, but linking one concept to another is immensely useful for creating systematic knowledge of how things work. Think about a wheel: we link it to ‘round’, ‘spin’ and ‘roll’, which tells us all we need to use it as a mechanism. By instantly indexing a concept to its uses, we save vast amounts of thinking time.

This neatly brings us to programming, which seems to be an intensified version of this conceptual linking skill: being able to conceptually pull functionality and mechanics (APIs) together, being able to create new mechanisms, and so on.

It will be interesting to see if we can use this information about brains to better program our interfaces. When you call your method gwaaztze(), that’s not very useful, because you need to create an unconnected (unindexable) reference against its mechanics and functionality to remember what the hell it does. But if we follow existing concepts and rename it to get_content_pointer(), we can instantly place its mechanical properties without looking it up. We may still have to look up its precise definition, but at least we can get a topology of the API without a manual.
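
As a toy sketch of that contrast (the Document class and its content are hypothetical, purely for illustration):

    class Document:
        def __init__(self, content):
            self._content = content

        def gwaaztze(self):
            # Opaque: nothing to index this name against, you just memorise it.
            return self._content

        def get_content_pointer(self):
            # Self-describing: the mechanics can be guessed without the manual.
            return self._content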

I also think this is why advanced non-technical users of ubuntu like to remap ‘apt-get install’ to just ‘install’ on the command line. To them it makes no sense that you would use the noun of the package system followed by two verbs that almost mean the same thing. What the package system is should be obvious or unimportant, and any one verb should be enough, say ‘software install [package]’.
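
A minimal sketch of what that verb-first wrapper might look like (assuming Python; the ‘software’ command is the hypothetical one from above, wrapping apt-get):

    import subprocess
    import sys

    def main():
        # 'software install <package>': hide the package system's noun,
        # keep one obvious verb.
        if len(sys.argv) >= 3 and sys.argv[1] == "install":
            subprocess.run(["apt-get", "install"] + sys.argv[2:])
        else:
            print("usage: software install <package>")

    if __name__ == "__main__":
        main()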

Perhaps what we mean by ‘ease of use’, beyond ‘stop trying to use it exactly like Windows’, is using existing conceptual links in a way that gives an indication of what the program is trying to do, and thus allows our users to index the mechanics and functionality without having to create static memories of commands and interfaces.

The Revolutionary Problem

I was talking to a good friend of mine last night about one of my previous blog posts.

It’s no secret that I don’t believe in the mechanical scalability of the support model in Free Software. I’m not even too sure of its directedness: how it orientates the organisation employing it towards the work it thinks it needs to do.

But I can be convinced of its usefulness as a leveraging device. Let me explain:

In the current software industry and community we have a problem, a great big fat one that is hurting how people use computers and how computer technology is allowed to progress. This problem can be neatly summed up as ‘Microsoft’. It doesn’t have to be them; it could be Apple in a few years, or IBM back in the 80s. It’s a huge monopoly with vast technical, legal, governmental and monetary leverage: a company that tells everyone what they will use on their computers (or as their computers) by forcing out of the marketplace everyone else who could possibly offer an alternative, by defining de facto standards that only it controls and understands, and by making fools of us.

Normally the government or the market would stamp down on this problem, because monopolies, much like anyone given too much power, do horrible things to themselves and others. But this time, that legal mechanism was allowed to fail.

Now people in the Free and Open Source Software community want software licensing to give the customer and society proper and useful rights to the use, modification and distribution of software code and its derivatives. This change in production is nothing short of a revolution. It may not even stop at software; it appears to be turning into a fully fledged information ‘production’ revolution.

So getting rid of the existing hegemony will take quite a bit of effort to build the required counter-leverage. Most of it will come from volunteers and the naturally more efficient processes that the licences allow; some can come from invested parties or angel investors, and some may even come from proxy funding like the support model.

But you do need something to replace the old mechanics with once you’ve managed to get rid of the old guard. Once you’ve managed to remove or assimilate Microsoft, Apple and Adobe (that list is growing all the time, ain’t it), you’ll need concrete, scalable and customer-facing mechanisms for funding progress. I don’t believe the support model, or the online services model, has a place here.

I also see danger signs when a typical FOSS company needs to keep some closed source software in order to protect revenue (this is just leverage to increase the scaling of the proxy fund). That means: no to proprietary extensions, no to misrepresenting other ‘enterprise-ready’ products to companies to convince them to shell out big money (MySQL, I’m looking at you), and no to tying trademarks to copyrights.

These devices might be required to win the revolution, but I think they’ll be a hindrance when the time comes to scale this thing up worldwide, in a way that gives every non-technical user a way to push the software forward as they wish.

The self-referential problem

If you’ve ever been to a philosophy class, you’ll know that there is one interesting issue about the meaning of existence that is still begging for answers.

This one is about free will: what it means to be in control of yourself, to decide, and whether you as a being are deterministic and predictable (in some fashion) or can make choices which cannot be determined by any scientific measurement.

Even scientists regularly stray into this area. The more neurology research is done, the more we discover how deterministic we really are. Take, for instance, the amount of the brain dedicated to consciousness (the part most people consider to be the ‘you’): it looks like only small amounts of the brain are under your direct control, while lots of other parts are seemingly automatic, animalistic.

This presents a lot of people with a terrible problem. We, as animals, like to consider that what we do and how we act is totally under our control; that how we think has some bearing on being good, moral, social; that this introspective control separates us from basic self-serving animal natures.

Of course this all misses two very important philosophical points:

  1. That as human beings we have free will because we embody the deterministic mind. Our bodies are not attachments to the mental process; this embodiment allows us to be both deterministic and to have free will.
  2. That knowing how we work creates a self-referential paradox. We will adjust our actions and motivations based on how we think we work, and thus change how we work, even just to prove we’re not deterministic. Think about how many time-travel stories have protagonists deliberately poking and pulling at the threads of time to prove that time can be changed (and, by extension, the deterministic nature of who we are).

It may be that future scientists will work out some facet of the brain that is inherently quantum in state. But I think we’ll find instead that the quantum effect is not quantum physics, but the quantum nature of self-awareness.

Job for life

The latest from the job market isn’t looking good. The number of jobs for programmers has shrunk (though not totally evaporated) and this depression looks like it’ll continue for another 1.5 to 2 years.

That’s a long time to go without money, and I’m a spousal immigrant: unable to claim unemployment benefits, burning through my savings and depending on my wife for survival.

It doesn’t look like the market for FOSS development will materialise any time soon either.

All of that is nothing compared to the wretchedness I feel about my last job. Wrong position, wrong manager, the wrong working mechanics set up; sometimes it’s just a hard fit.

But anyway, on to the future. I’ve always got something to do in the FOSS world and it’ll keep me plenty busy until opportunities come back.

Science on the Internet

As Document Freedom Day (DFD) is only a month away (March 25th), I thought it’d be a good idea to reflect on the implications of free access to information and the ability to have that information in a standard, open format.

The traditional science world seems to be going through a bit of a revolution itself. For a long time researchers have published their work through respectable journals such as Nature, and these journals can then only be read by other researchers at institutes that pay a great deal of money to the publishers. The prestige of the publication gets the author the credit and standing required to get jobs and advanced posts in her field.

But the internet has made this massive money-making scheme look a little ridiculous. Any researcher could post information to the Internet, where it can be freely shared with (god forbid) freelance scientists and hobbyists who are no longer part of the academic world and, more importantly, with people who are not in a position to pay huge sums of money for subscriptions.

In a previous blog post about Free Software, I said how I’m very much in agreement with Cory Doctorow about the nature of Free Software and its parallels with scientific progress. So how can it be that the traditional publishers of scientific knowledge are themselves fighting against this most important scientific principle?

Also of great importance is the format in which this information is distributed. When I do some artwork or some little icons, I always release the SVG sources and export to raster PNGs for the web or PDFs for printing. All of these formats are well-documented standards which I trust with my creative content.
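
That export step can even be scripted; here’s a minimal sketch assuming the cairosvg library (the file names are placeholders):

    import cairosvg

    # Raster export for the web, vector export for printing.
    cairosvg.svg2png(url="icon.svg", write_to="icon.png")
    cairosvg.svg2pdf(url="icon.svg", write_to="icon.pdf")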

I would consider it poisonous for the use of any proprietary format to become commonplace. Keeping a third party’s data in a secret format is such a problem that, if I were in power, I would make it illegal for software to do so. The rights of the creator, author and copyright holder are clearly more important than the trade secrets of the company involved.

The complexity of, and ignorance around, technology mean we cannot use caveat emptor (buyer beware) as a serious defence against implementing strong customer protections in this area.

Setting up the Lab

Here in Boston I’m helping a local community center get set up with more advanced technology. Obviously I have access to Free Software and plenty of knowledge about how it works.

So the most important aspect of my task is training. There’s no doubt that this place has lacked any sort of training or real maintenance for a long time.

I’m also researching some of the technologies that I’ve not previously had the opportunity to try out: things like Clonezilla, FOG, TFTP, DHCP servers and PXE booting. That’s what I’ve been up to this week. If I can get something good working, I’ll set up OpenLDAP and maybe some other tools next week.
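
For anyone curious, the DHCP half of PXE booting only needs a few lines of configuration; a minimal sketch assuming the ISC dhcpd server (the addresses and file names are placeholders for whatever the lab uses):

    subnet 192.168.0.0 netmask 255.255.255.0 {
      range 192.168.0.100 192.168.0.200;
      next-server 192.168.0.10;    # the TFTP server holding the boot files
      filename "pxelinux.0";       # boot loader the client fetches over TFTP
    }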