I've bought several really cheap 2.4GHz electric skateboard remote controllers from eBay and Banggood. These controllers are great: they are cheap and they are reliable. I have been using them for over a year without a single issue, and the radio link is very stable. I also play with RC cars and boats, and these controllers and their receivers can be used to control those as well. But here's one annoying thing: the steering direction is reversed! And there's no documentation anywhere that tells you how to reverse it back. I tested all of mine, and every one of them has it reversed. My hunch is that there's a bug in the firmware they all share, and since most people only use these for skateboards, which only need the throttle channel, nobody notices. But if you are like me and have a lot of RC gear, you'll be the victim. Fear no more: here's a simple way to reverse it, which works for the majority of remote controllers of this kind.

Like this one: https://www.amazon.com/XCSOURCE-Transmitter-Controller-Skateboard-OS917/dp/B073GX83NH/ref=pd_sbs_468_1/140-1711874-6248254?_encoding=UTF8&pd_rd_i=B073GX83NH&pd_rd_r=20b77af5-62c7-11e9-b421-cdcde3290510&pd_rd_w=lBFsp&pd_rd_wg=woywU&pf_rd_p=763ccc93-bfa2-47be-85ae-0cdd7e00b3da&pf_rd_r=103CJR8HX4VF5E4P3A2Q&psc=1&refRID=103CJR8HX4VF5E4P3A2Q

To fix this, swap two wires. This applies to all resistor-based (potentiometer) controls. See the following picture:

To understand why this works, let's take a look at how these controllers work. The steering knob is a potentiometer: the two outer wires we are swapping connect back to the supply's positive and negative, and the middle wire is the wiper. To see how far the knob is turned, the micro-controller measures the voltage between the wiper and one of the outer wires; which one it measures doesn't matter, because the potentiometer simply divides the supply voltage into two parts, and when one part increases the other decreases by the same amount. So once the outer wires are swapped, the measured voltage moves in the opposite direction, and the steering direction is reversed.
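
Here's a quick numeric sanity check of that voltage-divider argument (a minimal Python sketch; the 3.3V supply and treating the knob as a position in [0, 1] are assumptions for illustration only):

    # Model the steering knob as a simple voltage divider (illustrative values only).
    VCC = 3.3  # assumed supply voltage across the two outer wires

    def wiper_voltage(position, swapped=False):
        """Voltage on the middle (wiper) wire for a knob position in [0, 1]."""
        if swapped:
            position = 1.0 - position  # swapping the outer wires mirrors the divider
        return VCC * position

    for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"knob={pos:.2f}  normal={wiper_voltage(pos):.2f} V  swapped={wiper_voltage(pos, True):.2f} V")

The center position reads the same voltage either way, while the two ends trade places, which is exactly the reversal the micro-controller sees.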

I have tested this on the controllers mentioned above.

Install CUDA 10 on Ubuntu

  1. Download the CUDA SDK (deb local installer) from: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1810&target_type=deblocal
  2. In a terminal, run the following from the download directory.
    sudo dpkg -i cuda-repo-ubuntu1810-10-1-local-10.1.105-418.39_1.0-1_amd64.deb
    sudo apt-key add /var/cuda-repo-10-1-local-10.1.105-418.39/7fa2af80.pub
    sudo apt-get update
    sudo apt-get install cuda
  3. CUDA should be installed in:

    /usr/local/cuda-10.1

    With a soft link: /usr/local/cuda -> cuda-10.1

  4. Add the following to your environment (e.g. in ~/.bashrc) so CUDA is on your PATH and library path:

    export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
    export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
  5. Verify with samples:
    cp -r /usr/local/cuda/samples ~/cuda_samples
    cd ~/cuda_samples/1_Utilities/deviceQuery
    make
    ./deviceQuery
    You should see your GPU info printed.
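
    If you prefer a scripted check in addition to deviceQuery, here is a minimal Python sketch that uses only the standard library; it assumes nvcc and nvidia-smi are reachable through the PATH exported in step 4:

    import shutil
    import subprocess

    # Confirm the CUDA toolchain and driver tools are visible from the updated PATH.
    for tool, args in (("nvcc", ["--version"]), ("nvidia-smi", [])):
        if shutil.which(tool) is None:
            print(f"{tool}: not found -- check the exports from step 4")
            continue
        out = subprocess.run([tool, *args], capture_output=True, text=True).stdout
        first_line = out.splitlines()[0] if out else "(no output)"
        print(f"{tool}: {first_line}")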

Have you ever wondered how to download a web page, whether one with a beautiful design or one with a great article? Ctrl+S? No way: it dumps a bunch of files and doesn't work as you would expect 99% of the time.

If you use Chrome, I want to introduce you to a great Chrome extension, Huula Web Clipper. It can clip 99% of the websites on the web with 100% fidelity. Clipped web pages are stored in the cloud, so you never need to worry about losing them.

Better yet, if you feel inspired today, you can also redesign the clipped websites with drag-and-drop widgets and mix them as you wish.

Combining patterns with full-screen backgrounds, advanced use of gray, bold color blocks... This post summarizes five web design trends with examples. These trends saw fast initial adoption in late 2016, so it's almost inevitable that you'll see more and more of them in 2017. If you don't want to fall behind, read on!

07 Nov 2016

What does new Huula do?

It allows you to design on top of any web page on the Internet. It will also provide smart machine-learning-based web page synthesis, so you don't even need to design anything yourself: just give it a few hints and go!

Sounds great! Where are you now? How do I access it?

We are at the MVP stage. Visit https://huu.la to be among the first few. It's under active development, so expect a few bumps, but always feel free to drop us a line at huula@berkeley.edu if you need anything.

What will you do to the old services (i.e. website tours)?

Old services will continue to be supported, no worries.

23 Jan 2016

About Me

I'm a first-year master's student in EECS. Prior to Berkeley, I worked at Morgan Stanley and VMware as a software engineer and frontend engineer, so most of my time has been spent with JavaScript, the DOM, and CSS. I also write "backend" code in Scala and Java.

I'm interested in a lot of topics – high performance web apps, distributed systems, data visualization, machine learning, etc.

Application

Currently I'm trying to use machine learning techniques to learn the patterns of web pages – not the text content of the pages, but their layout and design. We see all kinds of web pages all the time, but their layouts are not that random – they have intrinsic patterns. For example, a title is usually followed by several paragraphs of smaller text; a horizontal box at the top of a page tends to contain a list of navigation elements; an image tends to be followed by a short description; and as card layouts become prevalent, boxes tend to be the same width with the same spacing between them. Some of these patterns I'm not even aware of, so I want to use "big data" to learn and reveal them for me.

Currently, I have a crawler collecting the DOM information of the top million websites [1] as ranked by Alexa. Since it needs the full DOM structure, it's not as simple as an HTTP GET; there's plenty of work to do. It also preserves screenshots of all these web pages, in the hope of finding correlations between the screenshots and the corresponding DOM trees. Even simple analytics – like what the most popular colors in the top million websites are, or what it looks like when you cluster the top million websites by color – can already generate a lot of interesting results.

My current approach to learning these patterns is to treat different DOM elements as different entities, so that a web page can be represented as a sequence of DOM elements (with the tree hierarchy encoded), and then to use an LSTM (long short-term memory) network to fit the resulting dataset.
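
As a rough illustration of that encoding, here is a minimal sketch using only Python's standard html.parser; the open/close markers are just one hypothetical way to encode the tree hierarchy, not the exact scheme I use:

    from html.parser import HTMLParser

    class DomTokenizer(HTMLParser):
        """Flatten a DOM tree into a pre-order token sequence with open/close markers."""
        def __init__(self):
            super().__init__()
            self.tokens = []
        def handle_starttag(self, tag, attrs):
            self.tokens.append(f"<{tag}>")    # entering an element
        def handle_endtag(self, tag):
            self.tokens.append(f"</{tag}>")   # leaving it encodes the hierarchy

    page = "<div><h1>Title</h1><p>Some text</p><p>More text</p></div>"
    tokenizer = DomTokenizer()
    tokenizer.feed(page)
    print(tokenizer.tokens)
    # ['<div>', '<h1>', '</h1>', '<p>', '</p>', '<p>', '</p>', '</div>']

A sequence like this is what the LSTM would consume; in practice each token would also carry features such as element size, position, and class.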

I already have about 100,000 web pages and their screenshots, which take about 60GB of space, and the required computation power is enormous too: my initial model would require an encoding space of several million dimensions, which on one hand needs some dimensionality reduction (I'm currently considering techniques similar to Word2Vec), and on the other hand needs fast enough hardware to drive it. So I hope to gain some skills from this course to cope with that.

Parallel computing is already used in the crawler. Since the crawler needs to contact remote servers, a lot of the crawling time is actually spent downloading the web page and its associated resources like JavaScript, CSS, images, and fonts. If we did this sequentially, it would only use one core, and even within that core a lot of time would be spent waiting, time that could instead be used for computation or for fetching other pages. So parallel crawling makes the whole process work like a pipeline.
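
A minimal sketch of that idea using only Python's standard library; the URL list and worker count are placeholders, and the real crawler drives a headless browser (PhantomJS) to get the rendered DOM rather than doing a plain HTTP GET:

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    urls = ["https://example.com", "https://example.org"]  # placeholder URL list

    def fetch(url):
        # Network I/O dominates, so threads overlap the waiting time.
        with urlopen(url, timeout=10) as resp:
            return url, len(resp.read())

    with ThreadPoolExecutor(max_workers=16) as pool:
        for url, size in pool.map(fetch, urls):
            print(f"{url}: {size} bytes")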

In the screenshot clustering case, there is a distributed k-means algorithm [2] that could be used to compute the clustering in parallel.
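
For intuition, here is a tiny single-machine k-means over per-screenshot average colors (a sketch assuming NumPy and random placeholder data; [2] describes how to carry out the same computation across multiple nodes):

    import numpy as np

    rng = np.random.default_rng(0)
    # Placeholder data: one average RGB color per screenshot, values in [0, 1].
    colors = rng.random((1000, 3))

    def kmeans(points, k=5, iters=20):
        centroids = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            # Assign each point to its nearest centroid.
            dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Recompute centroids; this step is what gets parallelized per data partition.
            for j in range(k):
                members = points[labels == j]
                if len(members):
                    centroids[j] = members.mean(axis=0)
        return centroids, labels

    centroids, labels = kmeans(colors)
    print(centroids)  # the k "dominant palette" colors of the collection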

As for the LSTM, the parallelism mainly comes from GPUs: matrix operations can be executed on a GPU in a highly parallel fashion, which gives roughly a 10x speedup for LSTM training and evaluation.

Technology: NodeJS, PhantomJS, ES6, HTML, CSS, Python, Scala, etc.

[1]. https://www.domainnamenews.com/up-to-the-minute/alexa-releases-top-1-million-sites-free-download/3412

[2] Oliva, Gabriele, Roberto Setola, and Christoforos N. Hadjicostis. "Distributed k-means algorithm." arXiv preprint arXiv:1312.4176 (2013).

Review of "'One Size Fits All': An Idea Whose Time Has Come and Gone"

Problem: In this paper, the author argues that traditional DBMSs have tried to be "one size fits all" but failed in the face of data warehouse and streaming systems, and that this attempt will continue to fail dramatically in the future as more and more diverse data-storage needs emerge.

Key idea: In the data warehouse example, the author notes that most enterprises have two storage systems: one stores the OLTP data, while the other just scrapes data from the OLTP system and allows business intelligence queries on it. But the two storage systems require different optimization techniques, like bitmap indexes and materialized views. A common practice in vendor products is to use a common front-end to cover two underlying engines, one for OLTP and one for data warehousing, but this structure makes marketing these products confusing. In the stream-processing example, the author argues that traditional OLTP systems are incapable of handling the "firehose" of data generated by sensor networks, and that their speed is also not satisfying for real-time event queries. Through the analysis of different domains like sensor networks, scientific databases, and text search, the author concludes that many different domain-specific databases will emerge in the future, and that database systems are entering an interesting period.

Will this paper be influential in 10 years? I think so. It identifies the key reason why people are developing all kinds of different storage systems, and it justifies new systems like Google F1, which aims to provide both OLTP and OLAP interfaces to upper-level applications. This trend also generates more and more research opportunities in specialized storage systems and in the processing frameworks built on top of them.