Saturday, August 30, 2008

Custom Drag and Drop for Lists in Flex 3

Yesterday I spent far too long trying to figure out why my custom dragDrop handler for the Flex 3 mx.controls.Tree class wasn't being called.

I had all the relevant callbacks (dragDrop, dragEnter, dragOver, dragExit) registered, had trace() statements in place so I could watch their behavior, and had dropEnabled set to true on the Tree. I could see trace() output coming from my dragEnter, dragOver and dragExit handlers. Still, I was getting nothing from the only one that really mattered: dragDrop. I tried playing around with DragManager.acceptDragDrop, tried calling preventDefault() as the first call in the event callbacks, and tried various other voodoo, but all to no avail. What could possibly be wrong?

The answer: in order for the dragDrop callback you set to actually get called, you need to set dropEnabled to false! The property described by Adobe as "A flag that indicates whether dragged items can be dropped onto the control" has to be set to "false" in order to actually receive the dragDrop event. Go figure.

The semantics of this property are, in reality, "A flag that indicates whether to use the default behavior for accepting dragged items, which is to assume that the object being dropped here is the object we expect, and to accept all. Set to false if you want to override this behavior."

This isn't just true for the Tree class, either. It's true for every control that inherits from ListBase, which is quite a few: HorizontalList, TileList, FileSystemList, Menu, Tree and DataGrid among them. Hopefully in Flex 4 sanity will prevail and they'll rename this property to something a little more intuitive.
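For what it's worth, here is roughly what my working setup looked like. This is a from-memory sketch rather than a drop-in solution: the handler names and the myTree id are made up, and the "treeItems" format assumes the drag originated from another Tree. The line that matters is dropEnabled = false.

import mx.controls.Tree;
import mx.events.DragEvent;
import mx.managers.DragManager;

private function setUpDragDrop():void {
    // Counterintuitively, dropEnabled must be false for our own dragDrop handler to fire.
    myTree.dropEnabled = false;
    myTree.addEventListener(DragEvent.DRAG_ENTER, onDragEnter);
    myTree.addEventListener(DragEvent.DRAG_DROP, onDragDrop);
}

private function onDragEnter(event:DragEvent):void {
    // Tell the DragManager this Tree is willing to accept the drop at all.
    DragManager.acceptDragDrop(Tree(event.currentTarget));
}

private function onDragDrop(event:DragEvent):void {
    // With dropEnabled = false this actually gets called, and we handle the drop ourselves.
    var items:Array = event.dragSource.dataForFormat("treeItems") as Array;
    trace("dropped " + items.length + " item(s)");
}

Call setUpDragDrop() from the Tree's creationComplete handler (or wire up the same listeners in MXML) and you should finally see your dragDrop trace output.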

Side note: I also considered the title "Worst Property Name Ever" for this post.

Monday, August 11, 2008

How to crop images with OpenCV in Python

I just spent way too long working this out, so I figured I'd contribute to the Google knowledge:

from opencv.cv import *  # old SWIG-style OpenCV bindings; adjust the import to match your install

cropped = cvCreateImage(cvSize(new_width, new_height), 8, 3)  # 8-bit, 3-channel destination image
src_region = cvGetSubRect(image, cvRect(left, top, new_width, new_height))  # ROI view into the source
cvCopy(src_region, cropped)  # copy the ROI pixels into the new image


You'd think this would be easy and/or documented, but nope!
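In case it's useful, here's how the snippet fits into a full load-crop-save round trip. The file names and crop rectangle are made up, and the imports assume the same old SWIG-style bindings (your install may expose them under slightly different module paths):

from opencv.cv import *
from opencv.highgui import *  # cvLoadImage / cvSaveImage live in highgui

image = cvLoadImage("input.jpg")                     # example input file
left, top, new_width, new_height = 10, 20, 100, 80   # example crop rectangle
cropped = cvCreateImage(cvSize(new_width, new_height), 8, 3)
cvCopy(cvGetSubRect(image, cvRect(left, top, new_width, new_height)), cropped)
cvSaveImage("cropped.jpg", cropped)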

Thursday, August 07, 2008

Advice for Startup Marketing Advice - Learn Statistics

An interesting blog post entitled Startup Marketing Advice came across my delicious network this past week. In it, the author espouses the scientific method as a cheap way to determine catchy, effective marketing messaging.

I have no problem with the thesis of the post. He's completely right that A/B testing is a terrific way to determine whether one message is better than another. Every elementary schooler knows that a good experiment has a control group and a test group, and the way to show that the variable under test is significant is to demonstrate that the test group responds significantly differently from the control.

The problem, though, is that the example given in the post doesn't prove anything! We'll look at the top two messages in his AdWords results ("Startup Marketing Advice" and "Marketing with Adwords") and show that the difference in their clickthrough rates is not statistically significant.

The first problem that should stand out is that the number of clicks is so small that the proportions cannot be estimated accurately. The usual rule of thumb for estimating a proportion is that you need at least 5 positive and 5 negative examples; the trial with the most clicks in his experiment received only 4. The CTRs Google reports therefore mean very little - the standard error (calculated as sqrt(p(1-p)/n) for a binomial proportion like CTR) can't be pinned down precisely, but it is sure to be nearly as large as the CTR itself.
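To make that concrete, here's a quick back-of-the-envelope check in Python. The 4 clicks and roughly 727 impressions below are an illustrative guess consistent with a 0.55% CTR, not figures taken from his table:

from math import sqrt

clicks, impressions = 4, 727                 # illustrative numbers only
p = clicks / float(impressions)              # estimated CTR, about 0.0055
se = sqrt(p * (1 - p) / impressions)         # standard error of a binomial proportion
print("CTR = %.2f%%, standard error = %.2f%%" % (100 * p, 100 * se))
# prints roughly: CTR = 0.55%, standard error = 0.27% - the error bar is half the estimate itself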

The second problem is the assessment of statistical significance between the different choices. The standard error of the difference between two estimators is sqrt(S_1^2/n_1 + S_2^2/n_2), where for a proportion S_i^2 = p_i(1-p_i). Calculating this for the top two AdWords campaigns, we see that the standard error of the difference between their clickthrough rates is somewhere in the vicinity of 0.35%. [Again, the statistics here are all inaccurate because we've already failed the "minimum of 5 clicks" rule of thumb, but we'll ignore that.] Because the measured CTR difference is 0.55% - 0.30% = 0.25 percentage points, we get a Z score of 0.25 / 0.35 = 0.71. Looking this up in a table of the normal distribution, that works out to roughly 75% confidence that campaign 1 is better than campaign 2. This doesn't sound too terrible, but it's hardly scientific, especially given the tiny sample sizes mentioned earlier.
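Here's that arithmetic spelled out in Python, using only the figures quoted above (CTRs of 0.55% and 0.30%, and a standard error of the difference of about 0.35%):

from math import erf, sqrt

p1, p2 = 0.55, 0.30      # measured CTRs, in percent
se_diff = 0.35           # standard error of (p1 - p2), in percent, from the estimate above
z = (p1 - p2) / se_diff
confidence = 0.5 * (1 + erf(z / sqrt(2)))    # one-sided normal CDF, i.e. Phi(z)
print("Z = %.2f, confidence that campaign 1 beats campaign 2 = %.0f%%" % (z, 100 * confidence))
# prints roughly: Z = 0.71, confidence = 76% - suggestive, but nowhere near the usual 95% bar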

Note: I am not a statistician. I just play one on a blog.