This seriously encourages people to stick with outdated technology (it is debatable, but this is my view) instead of moving on to program directly in the technology of the future (HTML5). This, obviously, is what Adobe wants. However, Flash has never been a good platform to develop with. I’ve heard experienced Flash developers (and mind you, these are the good ones, not your run-of-the-mill Flash developers) complaining about the shortcomings of Flash technology. Why should we let programmers get lazy and stick with a lousy platform when there is momentum to push them toward newer, cleaner technology? (Yes, part of the problem is the developers themselves; there are lots of lazy ones out there.)

Also, knowing that Adobe ships a very buggy Flash Player, I doubt that their Flash-to-JavaScript/HTML5 compiler will be any good. Extrapolate a little and we might see wholesale exporting of Flash games to HTML5 in the future. Oh gosh, I don’t want to imagine that world. This is what I see: JavaScript code exported from Flash with lousy performance, probably polluting the global namespace, and likely bloated as well. This will keep the web crawling as slowly as ever when developers could have moved to faster, cleaner technology. (On another note, Microsoft recently demoed IE9 with hardware-accelerated rendering of HTML5 canvas and animation. Uber cool. Other browsers will definitely go there as well.)

At the end of the day, what we are missing today is backward compatibility of the HTML5 canvas for browsers with no canvas implementation. What we are not missing: forward-fitting old, outdated technology onto newer, cleaner HTML5. So if anything, I would want to focus on writing a backward-compatible JavaScript abstraction of canvas, one that enables substitution of HTML5 with older technology (yes, I’m talking about Flash). We’ll see whether I have the time to take that up.

Apple might not do everything right, but I totally support their attempts to stamp Adobe out of the iPhone/iPad, including the infamous change to Section 3.3.1 on top of not implementing Flash at all on the iPhone and iPad.

]]>C++ objects are expressed in terms of multiples of the size of a char, so by definition the size of a char is 1. Here is the guarantee provided by the language:

1 ≡ sizeof(char) ≤ sizeof(short) ≤ sizeof(int) ≤ sizeof(long)

1 ≤ sizeof(bool) ≤ sizeof(long)

sizeof(char) ≤ sizeof(wchar_t) ≤ sizeof(long)

sizeof(float) ≤ sizeof(double) ≤ sizeof(long double)

sizeof(N) ≡ sizeof(signed N) ≡ sizeof(unsigned N)

where N can be char, short int, int, or long int. It is also guaranteed that a char has at least 8 bits, a short and an int at least 16 bits each, and a long at least 32 bits. A char can hold a character of the machine’s character set.

It is certainly interesting that I don’t remember half of this. Some of these are things we already knew from a basic computer organization class (where some of us learned C), but my memory doesn’t last that long.

P.S. This is also certainly very different from Java, where the basic “1” is actually an int instead of a char.
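To illustrate the contrast, here is a quick sketch (the class name is mine): Java pins the widths of its primitive types exactly in the language specification, whereas C++ only guarantees the chain of inequalities above.

```java
public class PrimitiveSizes {
    public static void main(String[] args) {
        // Java fixes these widths exactly; C++ only bounds them.
        System.out.println(Byte.SIZE);      // 8
        System.out.println(Character.SIZE); // 16
        System.out.println(Short.SIZE);     // 16
        System.out.println(Integer.SIZE);   // 32
        System.out.println(Long.SIZE);      // 64
    }
}
```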

]]>The best part is: more than 2000 pageviews in the past 9 days! And that’s without much publicity yet! Insane!

P.S. Btw, yes, the post title has ambiguous grammar (well not really, if you are speaking formally, but in informal speech, it is kind of ambiguous).

]]>Didn’t take long before a few of them got their invites soon afterwards. I think it takes the Wave team at least a day or two to send an invite once you nominate your friends. Anyway, now we have more interesting stuff to do.

The first wave: I started a wave about the recipe for the chicken katsu my girlfriend and I cooked last weekend. I might not look like it, but I do cook often enough, and the katsu was really good! Even my girlfriend’s sister, who has high standards for food, liked it. Well, back to the point: it was kind of fun to write the recipe in Wave. But wait a sec, I can do that just using Google Docs, can’t I? Or even this blog. Well, let’s put that thought on hold for a moment…

The second wave: my girlfriend went online and started playing with her Wave account. She was initially confused. So we both started a wave (oh, and I shared the katsu recipe wave with her) and started experimenting. This is where Wave really shines. We were able to edit any of the sub-waves together, and the changes appeared live on the other person’s Wave account. I could see exactly where she was editing and could even reply to her post before she finished writing. We also played (well, I did) with the playback function. Really cool!

The third wave: well, I am supposed to write an article for this small blog-based publication that my friend is running. Since she has a Wave account too (and, like me, has no idea what to do with it), I suggested that we might want to try writing the article as a wave. My idea is to write the outline and then the actual article, brainstorm, and have her edit the article as necessary. I thought that was a neat idea, but we’ll only be able to test it next week since I’m still kind of busy this week. Seems fun though! Imagine live collaboration on an article: bouncing ideas around through a wave or GTalk while writing the article at the same time. Plus, I’ll be able to see her edits appear live on my wave.

Well, anyway, back to the thought that I put on hold some moments ago: Google Wave equals Google Docs? Is that true? Well, yes and no! Wave can indeed work like Google Docs (well, right now minus the spreadsheet/presentation stuff, but I bet there will be a gadget to take care of that soon). And that is indeed the point! Wave is Google Docs + GTalk + GMail + wiki! It’s like this all-in-one, all-encompassing collaborative solution. Cool? Hell yeah. Will it work? Only time will tell.

Well, I had fun waving today. But I shall stop here. Still have a presentation to write for Monday and I need to get a draft of it by tomorrow. ):

]]>We will first consider the first variant and arrive at the optimal answer.

A simple answer would be to perform a brute-force search in the array, considering each position as a possible starting and ending point. Time complexity: O(n^2). Not very good, eh? Can we improve on this? Yes, we can, if we use dynamic programming.

For many of us the algorithm seems obvious, but I realize that it is really not that simple for students new to programming to come up with a correct algorithm. So I tried to explain the concept behind it: for each element of the array, we find the maximum-value sub-array that ends at that element. This value is either the value of the element itself (meaning the sub-array both starts and ends at that element) or the maximum value of a sub-array ending at the previous element plus the value of the element. (The latter only applies if the maximum value at the previous position is not negative.) There you go, we have a recurrence relation: given an array A, let v(k) be the maximum value of any sub-array ending at position k, where k ranges from 0 inclusive to N exclusive, and N is the length of the array. Hence we have:

v(0) = A[0]

v(k) = max{v(k-1) + A[k], A[k]} if k > 0

The maximum value will be given by max{v(k)} for k from 0 to N-1. To implement this in Java, we make several optimizations. Firstly, we can opt for either a recursive method or iterative dynamic programming. Both take O(n) time, but since we’re using Java, we would rather use iterative dynamic programming, as there is no tail-call optimization in Java. Then, rather than finding the maximum using max{v(k)} at the end, we keep a running maximum throughout the algorithm and return it. We also note that v(k-1) + A[k] is greater than A[k] if and only if v(k-1) is greater than 0. Here is how the code goes (this is a rather faithful implementation; it uses the same variable names as the recurrence relation above to make it easier to follow):

public static int calculateMaxValue(int[] A) {
    int[] v = new int[A.length];
    int max = v[0] = A[0];
    for (int i = 1; i < A.length; ++i) {
        v[i] = A[i];
        if (v[i-1] > 0) {
            v[i] += v[i-1];
        }
        if (v[i] > max) {
            max = v[i];
        }
    }
    return max;
}

This algorithm has O(n) time complexity.

What else can we do to optimize this code? Well, we don’t really need to keep an array of maximum values. Notice that at any point in time, to calculate v(k), we only need the values of A[k] and v(k-1). That turns the O(n) space requirement into O(1).

public static int calculateMaxValue(int[] A) {
    int max = A[0], current = A[0];
    for (int i = 1; i < A.length; ++i) {
        if (current < 0) current = 0;
        current += A[i];
        if (current > max) max = current;
    }
    return max;
}

Voila! We shall continue with the second variant next time. (:

]]>Anyway, a friend of mine is currently applying for jobs at several “big” companies (you know, the Google-Microsoft-Apple kind?). So he has been preparing for his interviews by hunting down quite a few questions online. One of the questions he asked reminded me of a question I used to use to exemplify the kind of questions these companies ask in their technical interviews:

Given a head pointer to a (possibly very long) linked list of unknown length, devise a method to sample m members at random. Ensure that the method yields equal probability for each element in the linked list to be picked.

Obviously we can traverse the list once to find its length, generate m random numbers from 1 to the length of the list, and traverse the list a second time to pick up those items. A common follow-up is to forbid going through the list more than once. Say the list is very long: the disk I/O needed to fetch the items (even after making sure the items are arranged to minimize cache misses) would make traversal a very expensive operation, and hence it should be minimized.
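The two-pass approach just described might look like this sketch (the class and method names are mine, and it assumes m is at most the list length):

```java
import java.util.*;

public class TwoPassSampler {
    // Naive two-pass sampling: one traversal to count the elements,
    // then pick m distinct random indices, then a second traversal
    // to collect the elements at those indices.
    public static <T> List<T> sampleTwoPass(Iterable<T> list, int m) {
        Random rand = new Random();
        // First traversal: find the length.
        int n = 0;
        for (T ignored : list) n++;
        // Pick m distinct random indices in [0, n); assumes m <= n.
        Set<Integer> indices = new HashSet<>();
        while (indices.size() < m) {
            indices.add(rand.nextInt(n));
        }
        // Second traversal: collect the chosen elements.
        List<T> samples = new ArrayList<>(m);
        int i = 0;
        for (T element : list) {
            if (indices.contains(i++)) samples.add(element);
        }
        return samples;
    }
}
```

This is exactly what the follow-up rules out: two full traversals, which is what the one-pass solution below the fold avoids.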

The solution I have is inspired by the fact that if the linked list contains m or fewer elements, I have no choice but to pick all of them. Hence, the main idea of the solution is to pick the first m elements while keeping track of k, the number of elements we have seen so far. When we encounter a new element, it has an m/k chance of being picked. If it is picked, we drop one of the previously selected m elements at random. We can prove that this indeed yields the correct probability via mathematical induction.
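One way to sketch the inductive step (my notation): assume that after seeing k-1 elements, each of them is in the sample with probability m/(k-1). Then:

```latex
% The k-th element is picked with probability m/k.
% A previously sampled element survives unless the new element is
% picked AND it happens to be the one replaced:
P(\text{survive}) = \frac{m}{k-1}\left(1 - \frac{m}{k}\cdot\frac{1}{m}\right)
                  = \frac{m}{k-1}\cdot\frac{k-1}{k}
                  = \frac{m}{k}
```

So after k elements, every element seen so far is in the sample with probability m/k, which is the invariant we wanted.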

Now of course it won’t be fun to implement this in Java or C++, so I decided to use Scheme to code this, just for the fun of it. The solution is surprisingly short and pretty. Here goes:

(define (sample-m lst m)
  (define (helper curr k m-vec)
    (if (empty? curr)
        ;; We have seen k-1 elements; if that is fewer than m,
        ;; just return the whole list.
        (if (<= k m) lst (vector->list m-vec))
        (let ((pos (random k)))
          (cond ((<= k m) (vector-set! m-vec (- k 1) (car curr)))
                ((< pos m) (vector-set! m-vec pos (car curr))))
          (helper (cdr curr) (+ k 1) m-vec))))
  (helper lst 1 (make-vector m)))

;;; Let's test this!
(sample-m (build-list 1000 values) 7)

EDIT: I decided to try writing a version for Java as well. Here is the Java code:

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;

public class LinkedListSampler {
    private static Random rand = new Random();

    public static <T> List<T> sample(LinkedList<T> list, int m) {
        ArrayList<T> samples = new ArrayList<T>(m);
        int k = 0;
        for (T element : list) {
            int pos = k++;
            if (pos < m) {
                // The first m elements are always picked.
                samples.add(element);
            } else if ((pos = rand.nextInt(k)) < m) {
                // Element k is picked with probability m/k, replacing
                // a random one of the previous picks.
                samples.set(pos, element);
            }
        }
        return samples;
    }
}

EDIT 2: Btw, seriously, the Java code is just for fun. Just imagine that LinkedList does not have a size method, will ya? ;)

]]>Anyway, this one is the cutest! Shit! The cat and the video are just so cute!

(Watch the other 10 videos from YouTube’s Chrome channel.)

]]>