So NOMOS has this awesome-looking Bauhaus-style watch: the Tangente. Apparently everyone wants one, but not everyone wants to break the bank ($2330 USD).
I write about technical topics in computer science, with a focus on Machine Learning; my hobbies, including guitar playing and ballroom dancing; and my life in the US. Some of my posts are written in Chinese.
Let \(\data\) be a set of data generated from some distribution parameterized by \(\theta\). We want to estimate the unknown parameter \(\theta\). What can we do?
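One standard answer is maximum likelihood estimation: pick the \(\theta\) that maximizes the probability of the observed data. A minimal sketch for a Bernoulli distribution (the choice of distribution and the sample size here are my own illustration, not from the post):

```python
import random

def bernoulli_mle(samples):
    """MLE for the Bernoulli parameter theta.

    The log-likelihood sum(x*log(t) + (1-x)*log(1-t)) is maximized
    at the sample mean, so the estimator is just the empirical frequency.
    """
    return sum(samples) / len(samples)

random.seed(0)
theta_true = 0.7
# Simulate 10,000 coin flips with P(heads) = theta_true
data = [1 if random.random() < theta_true else 0 for _ in range(10_000)]
theta_hat = bernoulli_mle(data)
print(theta_hat)  # close to 0.7
```

With enough samples, the estimate concentrates around the true parameter, which is the consistency property that makes MLE attractive.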
A few days ago, I encountered an issue that seems to be common among mid-2009 MBPs: one of the RAM modules (in slot 1) is no longer recognized. Or, sometimes it is recognized, but after sleep and wake-up the computer freezes, and it is impossible to recover without a forced power-off.
Recently, my mid-2009 MBP (Model A1278) failed to recognize the hard drive. My first bet was yet another disk failure, but that was not the case: I removed the hard drive, put it in an external enclosure, and it could be read without any problem.
After blogging with Octopress for a while, I have gained some insights into it, and my publishing flow has become smoother. I think it is the right time to share my flow as a reference.
|Original|Minimum Energy Reconstruction|Sparse Reconstruction|
This is a follow-up in the L1-minimization series. The previous two posts are:
This is a follow-up of the previous post on applications of L1 minimization.
Ordinary Least Squares (OLS), L2-regularization, and L1-regularization are all techniques for finding solutions to a linear system, but they serve different purposes. Recently, L1-regularization has gained much attention due to its ability to find sparse solutions. This post demonstrates this by comparing OLS, L2, and L1 regularization.
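The contrast can be seen in a few lines of NumPy. The sketch below fits all three estimators to the same sparse ground truth; the data, the regularization strength, and the use of ISTA (proximal gradient with soft-thresholding) as the L1 solver are my own assumptions for illustration, not from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]          # only 3 of 20 coefficients are nonzero
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# OLS: minimize ||y - X b||^2
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# L2 (ridge): closed form (X^T X + lam I)^{-1} X^T y -- shrinks but rarely zeros
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# L1 (lasso): ISTA, gradient step followed by soft-thresholding
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

beta_lasso = np.zeros(p)
step = 1.0 / np.linalg.norm(X, 2) ** 2    # 1 / Lipschitz constant of the gradient
for _ in range(2000):
    grad = X.T @ (X @ beta_lasso - y)
    beta_lasso = soft_threshold(beta_lasso - step * grad, step * lam)

for name, b in [("OLS", beta_ols), ("ridge", beta_ridge), ("lasso", beta_lasso)]:
    print(f"{name:>5}: {np.count_nonzero(np.abs(b) > 1e-6)} nonzero coefficients")
```

OLS and ridge return dense solutions (every coefficient slightly nonzero), while the soft-thresholding step in the L1 solver drives the irrelevant coefficients exactly to zero, which is the sparsity property the post discusses.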