Showing posts with label open source. Show all posts

20110226

AMD Open-Sources Video API

This is quite a leap forward for free and open-source software users everywhere.

20090619

NVidia Prefers WinCE to Android

Hordes of ARM-powered netbooks have been popping out of the woodwork at Computex this year, one of which was touting the new NVidia ARM Tegra chip.

Here's a link to an article on Slashdot which reports that NVidia is not ready to back Android as a capable platform for Netbooks.

Unfortunately, I must agree - although it's not entirely the fault of Google, or the Open Handset Alliance, or Linux - or of NVidia, for that matter.

The problem is that NVidia would need to expose yet another kernel and user-space ABI (for their latest integrated Tegra GPU, no less), and they are not prepared to do so. Aside from that, Android performs much of the hardware acceleration (a.k.a. DSP algorithms) for graphics and audio in a completely re-done set of non-portable libraries (the last time I saw the code), rather than using a single, portable abstraction layer such as OpenGL.

My recommendation? End-users should stick with the WinCE build (and the NVidia-modified UI) that will ship with NVidia-based netbooks - IF THEY CHOOSE TO BUY A NETBOOK THAT USES AN NVIDIA TEGRA CHIP. Certainly, many other, more mature ARM chip vendors will be offering netbook platforms (e.g. Qualcomm, TI, etc.).

However, for almost all other ARM cores with unencumbered ABIs and APIs for hardware acceleration - by all means use Android! The Android community, composed of literally thousands of developers, will ramp up on new technology and integrate software exponentially faster than NVidia and Microsoft can as single entities. Plus, as we have seen in the past, the release cycle for Android will likely be more frequent, since it's Open Source. Furthermore, Android is less likely to fall behind and become unmaintained (which was the whole purpose of the OHA in the first place), while the NVidia & WinCE combination will likely become unsupported and outdated at some point.

20090529

Why Windows Suffers From Bloat

Yesterday, I was trying to install a popular PocketPC weather application on my 'smart' mobile phone, which runs Windows CE 5.0. Linux almost runs on it without problems, so hopefully I won't be using WinCE much longer, but rather Android. In any event, the application required somewhere in the range of 5 to 10 megabytes of program storage space, which was not available on my device because most of the space was already filled with other proprietary software. Never mind the fact that my device has 128 MB of program storage and 64 MB of RAM, which is more than enough in my humble opinion - 5 to 10 megabytes for a simple program to read the weather? I would say that is just slightly excessive.

Let me explain a little bit about resource utilization in computer systems. I feel I can generalize to 'computers' here because a mobile device is basically a small computer anyway, with relatively less storage and working memory. As a long-time member of the Linux ecosystem, I have become accustomed to (spoiled by?) the always-comfortable feeling that my permanent storage and working memory are 'very large' compared with the amount I actually need to run any application. I never have to worry about running out of hard-disk space when I install a new program, nor about running out of RAM or experiencing large-scale system slow-down when many programs run simultaneously. Both of these problems plague most Windows users I know.

Why does that happen? The answer is technically a bit involved, but it can be explained with a very simple analogy: Open Source Software (OSS) shares code and closed-source software (CSS) doesn't.

OSS developers are free to use, modify, and redistribute source (and binary) code. One of the benefits of this philosophy is that several applications (however unrelated they seem) can share the same code for common tasks. For example, a media player needs code (some algorithm) to sort and list all of your favourite tracks in alphabetical order. Similarly, a spreadsheet application needs similar code to sort a list of names alphabetically. In the OSS world, both of these applications can use the very same code to sort a list alphabetically (as a general example). The developer of the media player, the developer of the spreadsheet, and the developer of the alphabetical sort code are all able to help each other and improve the sort algorithm. They exist in and contribute to a common ecosystem where everyone benefits.

On the other hand, in the Windows world, similar programs developed by different companies are in a state of economic competition. For example, two different tax programs compete for customers, and (usually) the 'better' product wins. However, the problem exists even between companies that develop completely different applications, for the simple reason that many programs require the same generic algorithms for sorting lists, etc. The closed-source software (CSS) ecosystem therefore breeds an inherent distrust between its members, for fear that a competitor might 'steal' an algorithm and thus the potential revenue which that algorithm could generate.

Ok, fine, but how does this relate to computer memory and storage space?

In the Windows world, every program (each written by a different company) naturally has its own secret place to store its code for alphabetical sorting. When the media player is installed on your Windows computer, there is a special file, or library, that stores the sorting algorithm. For every program that uses similar code, the storage space is duplicated - and we're only considering an algorithm to sort names alphabetically! When one considers the many thousands of algorithms that are stored, the storage utilization starts to look very inefficient. Even worse - it's not just the storage (hard disk) space that's affected, but also the working memory (RAM) of the computer!!

In the Open Source Software world, this code resides in one place for the whole world to use and modify. Similarly, the code only needs to be installed in one place on a Linux computer - in a single file shared by every program that requires an alphabetical sorting algorithm. Essentially the same thing happens while programs are running: regardless of the number of programs that reference the code, it exists only once in RAM (per-program context is saved elsewhere). The same algorithm (code) therefore requires a fraction of the working memory on a Linux computer that it does on a Windows computer, for the same number of programs. Sharing is good!!

The benefit of dynamically-linked libraries (shared objects) over statically-linked libraries is old news for most of the world, including Microsoft. Developers benefit from code reuse, common bug-fix propagation, and of course reduced memory usage, among many other things. Ironically, Windows has supported DLLs for a very long time. However, in spite of the many benefits, most third-party commercial application developers will likely continue to use their own stacks instead of a communal one, so that their 'intellectual property' is not sacrificed. Microsoft has partially rectified this problem with the introduction of .NET, C#, and managed code, but there are still plenty of legacy applications out there using VC++, MFC, and the Win32 API that will never be migrated.
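There is a loose analogy in Python's import system (a sketch only - this illustrates per-process code sharing, not how Windows DLLs work internally): no matter how many components import the same library, its code is loaded and cached exactly once.

```python
import json            # first import: module code is loaded and cached
import sys

import json as json2   # a 'second program' importing the same library

# Both names refer to the single cached module object in sys.modules;
# the code exists once, no matter how many importers there are.
print(json is json2)                # True
print(sys.modules["json"] is json)  # True
```

The same principle applies one level down: the OS maps a shared object like libc into memory once and lets every running process reference those same pages.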

I would assume that resource-utilization efficiency grows at least linearly (in some useful range) with the amount of code sharing. Dynamic sharing is much more prevalent in an Open Source Software environment. Therefore, Open Source Software environments exhibit dramatically more efficient resource utilization.

20081231

Shape-Writing or "Swyping"

Shape-writing, or "swyping" (instead of typing) seems to be the newest trend among touchscreen enabled devices. Of course, along with any new idea comes a slew of conflicting software patents that were accepted (where else?) at the United States Patent Office.

I believe that Apple, the original T9 creator, and Shape Writer Inc each hold patents for what is essentially the same 'technology'. Please humor my quoted use of the word technology, because I am one of several billion people who do not believe that software (which is what this is) should be patented.

This technology is very interesting in that it uses 'intelligent algorithms' to determine what word the user is trying to write. I thought that I would outline the fundamentals of these intelligent algorithms, just in case anyone in the FLOSS universe would like to implement them for a fun project in their spare time.

First of all, let's define the path that the user's finger or stylus traces on the touch screen. It has 1) a starting point, and 2) a finite number of 'corners'. If we assume that such a trace or path lies atop a coordinate system - the complex plane, for example - then the sequence of 'corners' simply becomes a sequence of ordered coordinates in the complex domain.

Let's use a box, for example.

[ (0 + 0j), (0 + 11j), (11 + 11j), (11 + 0j), (0 + 0j) ]

The above set of ordered coordinates traces a box, starting at the origin, and continuing clockwise. The area of the box is (11x11) units squared.

In fact, the sequence of ordered coordinates comprises a signal in the complex spatial domain (i.e. a 1-D signal), and as such (making several rudimentary assumptions), it can be mapped to a different coordinate space (the frequency domain) using the DFT (discrete Fourier transform).

The most interesting part, I find anyway, is that taking the IDFT (inverse discrete Fourier transform) of just the 0-frequency component of such a signal - which is actually the 'mean' - results in a single point lying at the geometric center of the box (i.e. its center of mass). Furthermore, the IDFT of the first two components results in an ellipse containing all points defined in the signal. As one would expect from summing successive components of a Fourier representation, IDFTs over the next successive components result in a shape that increasingly resembles the original signal (i.e. a box).
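The 0-frequency claim is easy to verify numerically. A minimal sketch with NumPy, using the box corners from the example above (the closing point is omitted, since the DFT treats the sequence as periodic):

```python
import numpy as np

# Corners of the example box as complex coordinates.
z = np.array([0 + 0j, 0 + 11j, 11 + 11j, 11 + 0j])

X = np.fft.fft(z)

# The 0-frequency (DC) bin of the DFT is the sum of the points, so
# dividing by N gives their mean - the geometric center of the box.
centroid = X[0] / len(z)
print(centroid)  # (5.5+5.5j)
```

Zeroing all but the lowest-frequency bins of `X` before calling `np.fft.ifft` produces the ellipse-to-box progression described above.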

It's pretty cool, at any rate. I had a lab exercise in Grundlagen der Geometrische Signalverarbeitung (Introduction to Geometric Signal Processing) in my first year at Uni-Kiel, which used a Christmas tree as an example.

In any event, if you define several points of interest in the complex domain by overlaying the center-points of an on-screen keyboard, then by careful application of Z-domain filtering it's possible to determine the exact word that a person is trying to 'swype'. The Z-domain filtering is partly linear but, unfortunately, also non-linear in nature. Why? Well, corners represent signal components, so if a letter belonging to the word happens to lie underneath the path where no corner exists, it's as if a component of the signal has been 'lost' during 'transmission'. Higher-frequency components are effectively filtered out, so the 'intelligent' part of the algorithm is associating a received signal with another signal containing higher-frequency components. This can be done, for example, by maximum-likelihood or MMSE methods.

Alternatively, given a large enough sample group of input/output pairs, this problem can be solved quite easily using a lexicon or word-association database. Given a path with a certain number of components, or corners, one should be able to do a very fast lookup of associated words in order of decreasing probability.
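A toy sketch of the lexicon idea, with everything below being my own assumption (the keyboard geometry, the `corner_letters` helper, and the five-word lexicon are all hypothetical, not from any shipping implementation): index each word by the letters that would survive as corners of its ideal swipe path, then decoding becomes a dictionary lookup.

```python
from collections import defaultdict

# Approximate QWERTY key centers as complex coordinates:
# column (plus a row stagger) as the real part, row as the imaginary part.
KEYS = {}
for r, row in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"]):
    for c, ch in enumerate(row):
        KEYS[ch] = complex(c + 0.5 * r, -r)

def corner_letters(word):
    """Letters of `word` that survive as corners of an ideal swipe:
    the first letter, the last, and any letter where the path bends."""
    # Consecutive repeats ('ll' in hello) map to one key; collapse them.
    letters = [word[0]]
    for ch in word[1:]:
        if ch != letters[-1]:
            letters.append(ch)
    pts = [KEYS[ch] for ch in letters]
    kept = [letters[0]]
    for i in range(1, len(pts) - 1):
        u = pts[i] - pts[i - 1]
        v = pts[i + 1] - pts[i]
        # Nonzero 2-D cross product => the path bends at this key.
        if abs(u.real * v.imag - u.imag * v.real) > 1e-9:
            kept.append(letters[i])
    kept.append(letters[-1])
    return "".join(kept)

# Index a tiny lexicon by corner letters. A real system would rank the
# candidates sharing one key by decreasing word probability.
lexicon = ["hello", "help", "hole", "to", "too"]
index = defaultdict(list)
for w in lexicon:
    index[corner_letters(w)].append(w)

print(index[corner_letters("hello")])  # ['hello']
print(index[corner_letters("too")])    # ['to', 'too']
```

The 'to'/'too' collision shows why the lookup must return a ranked candidate list rather than a single word.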

Ta Da!

I'm fairly certain that this 'technology' will be implemented in Android. However, I would also like to see it in the OpenMoko project.

Maybe such an app has already existed in the open-source world for some time. If not, or if anyone would like to volunteer to write such an app, I would be very open to further explanation. Just submit a comment below.

Update (20100412): It appears that Samsung has integrated the idea of 'swyping' into Android on their new(ish) Galaxy S devices, as you can see from the video below. Very Cool.

20081021

Hello, Open-Source World!

It's official... Google's brainchild, Android, has finally said
Hello, Open-Source World!

I wonder how many hours it will take for the first Neo 1973 or Neo FreeRunner port to surface. The biggest challenge, it would seem, will be bridging the gap between the (minimal) ARMv5TE instruction set that Android was designed for and the OpenMoko handsets' ARMv4T instruction set (as present in the Samsung 2442 SoC).

Perhaps the next handset that OpenMoko releases will feature native ARMv5TE compatibility.

Update: I've been building Android for the last few hours, having made a few build-oriented changes that I think will help bridge the ARMv5TE - ARMv4T gap. I'm going to list a few of the errors I've been running into below. Please note - although I list each undefined instruction only once, the errors occur many times and in different subdirectories. I will post ARMv4T-compliant work-arounds soon. Please be patient.

  • bionic/libc/arch-arm/bionic/memcmp.S:44: Error: selected processor does not support `pld [r0,#0]'
  • system/core/libpixelflinger/t32cb16blend.S:121: Error: selected processor does not support `smulbb lr,r7,lr'
  • system/core/libpixelflinger/t32cb16blend.S ... Error: selected processor does not support `smulbt ...'
  • external/jpeg/jidctfst.S:148: Error: selected processor does not support `smlabb r0,r2,r3,r5'
  • dalvik/vm/arch/arm/CallEABI.S:239: Error: selected processor does not support `blx ip'
  • dalvik/vm/mterp/out/InterpAsm-armv5.S:2653: Error: selected processor does not support `ldrd r2,[r0,#offStaticField_value]'
  • dalvik/vm/mterp/out/InterpAsm-armv5.S ... Error: selected processor does not support `strd ...'
  • external/sonivox/arm-wt-22k/lib_src/ARM-E_mastergain_gnu.s:77: Error: selected processor does not support `smulwb r4,r4,nGain'
  • external/sonivox/arm-wt-22k/lib_src/ARM-E_voice_gain_gnu.s:114: Error: selected processor does not support `smlawb tmp1,gainLeft,tmp0,tmp1'
  • external/opencore//codecs_v2/audio/aac/dec/src/calc_auto_corr.cpp
    /tmp/ccBi9nUH.s: Assembler messages:
    /tmp/ccBi9nUH.s:652: Error: selected processor does not support `clz r0,ip'
  • smultt, smlatt, smlawt, smulwt, qadd, qsub, qdadd, qdsub, smlabt
  • etc, etc, etc ...
Build problems that were not architecturally related:
  • out/target/product/generic/obj/SHARED_LIBRARIES/libdvm_intermediates/Misc.o: In function `dvmAllocBit': dalvik/vm/Misc.c:247: undefined reference to `ffs'
Theoretically, there are three possibilities for getting Android onto the FreeRunner.
  • The first is to replace the unsupported mnemonics with an equivalent ARMv4 or ARMv4T instruction sequence. In some cases, this is impossible without a lot of context information.
  • The second is to completely re-implement each section from scratch, wherever one of the ARMv5TE instructions is issued, using an algorithm optimized for the ARMv4 or ARMv4T architecture.
  • The third option is simply to remove the instruction, e.g. for pld, which only hints that memory should be prefetched - removing it costs performance, not correctness.
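To make the first option concrete, here are the kinds of substitutions I have in mind - sketches only, untested, and the choice of r12 as a scratch register is an assumption that depends entirely on the surrounding context:

```asm
@ ARMv5TE: blx ip
        mov     lr, pc           @ pc reads as this instruction + 8,
        bx      ip               @ so lr lands just past the bx

@ ARMv5TE: pld [r0, #0]          @ pure prefetch hint: simply delete it

@ ARMv5TE: ldrd r2, [r0, #off]   @ loads r2 and r3 from consecutive words
        ldr     r2, [r0, #off]
        ldr     r3, [r0, #off + 4]

@ ARMv5TE: smulbb lr, r7, lr     @ 16x16 signed multiply of low halfwords
        mov     r12, r7, lsl #16
        mov     r12, r12, asr #16  @ sign-extend low half of r7
        mov     lr, lr, lsl #16
        mov     lr, lr, asr #16    @ sign-extend low half of lr
        mul     lr, r12, lr
```

The saturating instructions (qadd, qdadd, etc.) and clz are the painful ones, since their ARMv4T equivalents need compares and branches rather than a short fixed sequence.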