Black and Decker Portable AC

I recently picked up a Black and Decker Portable AC https://www.amazon.com/Black-Decker-Portable-Conditioner-Display/dp/B01DLPUWG2 and wanted to port over a Blynk project I had created. I couldn’t find anything online about what the infrared protocol is for the remote. So here you go:

Protocol  : TCL112AC
Code      : 0x23CB26010024030D00000000C009 (112 Bits)
Mesg Desc.: Power: On, Mode: 3 (Cool), Temp: 18C, Fan: 0 (Auto), Econo: Off, Health: Off, Light: On, Turbo: Off, Swing(H): Off, Swing(V): Off
uint16_t rawData[227] = {3134, 1586,  496, 1174,  504, 1166,  502, 334,  494, 366,  474, 362,  476, 1166,  502, 332,  496, 340,  498, 1170,  498, 1172,  496, 366,  474, 1194,  474, 336,  502, 358,  470, 1174,  506, 1164,  504, 332,  496, 1174,  494, 1176,  504, 358,  472, 336,  502, 1166,  502, 334,  494, 340,  500, 1168,  500, 336,  504, 358,  472, 336,  502, 358,  470, 364,  476, 332,  496, 364,  474, 360,  468, 366,  472, 362,  478, 358,  472, 334,  504, 356,  472, 362,  476, 358,  470, 364,  474, 360,  470, 1172,  496, 366,  474, 334,  494, 1174,  504, 330,  498, 336,  502, 1166,  502, 1168,  500, 334,  496, 340,  498, 362,  476, 332,  498, 336,  502, 332,  496, 1172,  496, 338,  502, 1168,  500, 1170,  498, 336,  504, 358,  470, 364,  474, 360,  470, 364,  474, 362,  468, 340,  498, 360,  468, 366,  472, 362,  476, 332,  496, 364,  476, 332,  496, 366,  476, 332,  496, 338,  500, 360,  468, 366,  472, 336,  504, 356,  472, 364,  476, 358,  470, 364,  476, 358,  470, 364,  474, 360,  468, 366,  474, 360,  468, 340,  500, 362,  468, 366,  472, 362,  476, 358,  470, 364,  474, 332,  498, 364,  476, 332,  496, 366,  474, 360,  468, 366,  472, 362,  476, 330,  498, 1198,  472, 1172,  496, 1174,  496, 338,  500, 362,  478, 1164,  504, 330,  498, 364,  476, 360,  468, 366,  474};  // TCL112AC
uint8_t state[14] = {0x23, 0xCB, 0x26, 0x01, 0x00, 0x24, 0x03, 0x0D, 0x00, 0x00, 0x00, 0x00, 0xC0, 0x09};
Protocol  : TCL112AC
Code      : 0x23CB26010020030D00000000C106 (112 Bits)
Mesg Desc.: Power: Off, Mode: 3 (Cool), Temp: 18C, Fan: 0 (Auto), Econo: Off, Health: Off, Light: On, Turbo: Off, Swing(H): Off, Swing(V): Off
uint16_t rawData[227] = {3138, 1582,  502, 1170,  498, 1170,  498, 364,  476, 360,  468, 366,  472, 1196,  472, 364,  476, 360,  468, 1200,  478, 1166,  502, 332,  496, 1200,  468, 368,  472, 362,  476, 1192,  476, 1166,  502, 362,  478, 1192,  476, 1168,  500, 362,  478, 356,  472, 1196,  472, 364,  474, 360,  468, 1200,  478, 330,  498, 364,  476, 360,  468, 364,  474, 360,  468, 366,  472, 362,  476, 358,  470, 364,  474, 360,  468, 364,  474, 360,  468, 366,  472, 362,  476, 358,  470, 364,  474, 360,  468, 364,  474, 360,  468, 366,  472, 1194,  474, 362,  476, 358,  470, 1198,  470, 1174,  504, 358,  470, 364,  474, 360,  468, 366,  474, 362,  478, 358,  470, 1196,  472, 366,  474, 1194,  474, 1170,  498, 364,  476, 360,  468, 366,  474, 360,  468, 368,  472, 362,  476, 358,  470, 366,  474, 362,  466, 366,  474, 362,  468, 366,  472, 362,  476, 360,  470, 364,  474, 360,  468, 366,  474, 362,  478, 356,  472, 364,  476, 358,  470, 366,  474, 362,  478, 356,  472, 366,  474, 358,  470, 364,  474, 360,  468, 366,  472, 362,  478, 356,  472, 362,  476, 358,  470, 364,  474, 360,  468, 366,  474, 1196,  474, 336,  504, 358,  472, 364,  474, 360,  468, 366,  472, 1194,  474, 1170,  498, 364,  476, 1194,  474, 1170,  498, 362,  476, 360,  470, 366,  474, 362,  476, 358,  472};  // TCL112AC
uint8_t state[14] = {0x23, 0xCB, 0x26, 0x01, 0x00, 0x20, 0x03, 0x0D, 0x00, 0x00, 0x00, 0x00, 0xC1, 0x06};
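
These captures look like the output of the IRremoteESP8266 dump sketch, so replaying one from an ESP8266 is straightforward. Here's a minimal, untested sketch of the idea, assuming that library and an IR LED on GPIO 4 (the pin and the five-second resend loop are placeholders, not part of my actual setup):

// Minimal, untested sketch: replay the "power on" capture above from an ESP8266.
// Assumes the IRremoteESP8266 library and an IR LED driven from GPIO 4.
#include <Arduino.h>
#include <IRremoteESP8266.h>
#include <IRsend.h>

const uint16_t kIrLedPin = 4;  // placeholder; use whichever GPIO drives your IR LED
IRsend irsend(kIrLedPin);

// Paste the full rawData[227] array from the capture above here.
uint16_t rawData[227] = {3134, 1586, 496, 1174 /* , ... rest of the capture ... */};

void setup() {
  irsend.begin();
}

void loop() {
  irsend.sendRaw(rawData, 227, 38);  // 38 kHz carrier (typical for this protocol)
  delay(5000);                       // resend every five seconds while testing
}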

TDR with the NanoVNA

The performance of my satellite downlink station is nowhere near as good as I would like, so I’ve been working through various troubleshooting measures to figure out what is going on. In my previous post, you can see some of my explorations into antenna tuning. In this post I take my first pass at exploring the feedline situation. It’s been up for at least five years, and I wanted to make sure nothing untoward had happened to it.

To help measure it, I ran a TDR test on the feedline. This is not a perfect measurement, since I left the antenna attached, which skews some of the results. But it does let you see where impedance changes occur along the feedline. Here’s what the resulting plot looks like:

At first I was terribly confused, thinking my feedline was badly broken. It’s important to note that the current NanoVNA-Saver software doesn’t present TDR the way most people expect: it stacks the impedances of the various segments, so you shouldn’t read that big flat segment in the middle as sitting at 90 ohms.

Breaking it down by segment I get the following: a short stub that adapts SMA to a UHF connector; 10 feet of LMR 400 equivalent; a big impedance jump as the signal travels through a 3 inch barrel connector through a door; 30 feet of LMR 400 outside; another barrel connector; 10 more feet of LMR 400; then the antenna.
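
For anyone mapping the time axis of that plot onto the physical runs above, the conversion is just the round-trip delay scaled by the cable’s velocity factor and divided by two. A quick sketch of the arithmetic, with a made-up 100 ns delay and the roughly 0.85 velocity factor typical of LMR-400 class cable:

// Rough sketch: convert a TDR reflection's round-trip delay into distance down the
// cable. The 100 ns delay is a made-up example; 0.85 is a typical velocity factor
// for LMR-400 class cable.
#include <cstdio>

int main() {
  const double c = 299792458.0;      // speed of light in vacuum, m/s
  const double vf = 0.85;            // cable velocity factor (approximate)
  const double round_trip = 100e-9;  // seconds between launch and reflection

  const double distance_m = vf * c * round_trip / 2.0;  // /2: the signal goes out and back
  std::printf("impedance change at roughly %.1f m (%.0f ft) down the line\n",
              distance_m, distance_m * 3.28084);
  return 0;
}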

What did I learn in the end? It looks like my various lengths of cable are fine. Most importantly, I learned that UHF connectors have a surge impedance of around 35 ohms instead of the 50 ohms we are looking for. That causes reflections and degrades the return loss of your feedline. See the Wikipedia article on the impedance of UHF connectors: https://en.wikipedia.org/wiki/UHF_connector
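
To put rough numbers on that mismatch, treat the connector as a 35 ohm section in a 50 ohm system (ignoring that a 3 inch connector is electrically short at these frequencies, which softens the real-world effect). The reflection works out to roughly:

// Back-of-the-envelope reflection numbers for a ~35 ohm UHF connector in a 50 ohm
// system. Illustrative only; a short connector reflects less than a long 35 ohm
// section would.
#include <cmath>
#include <cstdio>

int main() {
  const double z0 = 50.0;  // system impedance, ohms
  const double zc = 35.0;  // approximate surge impedance of a UHF connector, ohms

  const double gamma = (zc - z0) / (zc + z0);                              // about -0.18
  const double rl_db = -20.0 * std::log10(std::fabs(gamma));               // about 15 dB
  const double swr = (1.0 + std::fabs(gamma)) / (1.0 - std::fabs(gamma));  // about 1.43

  std::printf("gamma = %.3f, return loss = %.1f dB, SWR = %.2f\n", gamma, rl_db, swr);
  return 0;
}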

So where does that leave me?

This showed I didn’t have any problems in my feed lines other than the ones I caused myself by using UHF connectors and patching together shorter lengths of cable. My next steps to evaluate the system performance will be to measure the SWR and return loss of just the feedline without the antenna connected. This means I’ll replace the antenna with a 50 ohm calibration standard to measure return loss; then replace it with a dead short to measure insertion loss. More detail here: https://www.tek.com/blog/improving-vna-measurement-accuracy-quality-cables-and-adapters

I’ve been having a strong desire to replace every UHF connector with a Type N connector. These measurements are an attempt to quantify what practical improvement, if any, I would see from that change. I fear that my real problem is that I live in a city and there is just far too much noise around me.

I cut it twice and it’s still too short!

For my satellite ground station I built an antenna from the plans by WA5VJB http://www.wa5vjb.com/references/Cheap%20Antennas-LEOs.pdf

I originally built a 7 element 70cm antenna, then added on a two element 2m one. I had tuned the 70cm antenna, but after the second cut to shorten the driven element it ended up tuned a little too low. It passed, just barely.

While attaching the new antenna to the old one, I got frustrated smacking the darn thing into everything and hacked off the end, converting it to a 5 element in short order. With years of experience under my belt, I have learned that anger is not a great engineering design philosophy.

I didn’t check the tuning before mounting it up here.

Let’s take a look at what the NanoVNA says. This is the return loss chart for the 2 m band.

While the tuning is a little low for what I want, it’s close enough to work. For example, most satellites are around 145 MHz.

Unfortunately, this is awful. I need tuning closer to 435 MHz, but this bad boy is coming in at 415 MHz. Looks like it’s time to take out the plumber’s torch and solder some more wire back onto the antenna :).

Abandoning Pi-Hole for cloudflared

I’ve been using Pi-Hole for a while and it just caused too many problems. Many shopping carts across the web would just fail, and since I need to run Google ad campaigns, not being able to get to the admin UI was a deal breaker. Therefore I decided to move DNS resolving back to my ER-X and instead use cloudflared to resolve DNS queries, so CenturyLink has a harder time selling my browsing history.

I found a great post on how to do this at https://reichley.tech/dns-over-https-edgerouter-x/, but it doesn’t cover the v2 series of EdgeOS, which is based on Debian 9. These are my quick notes on changes to their directions.

When you install a new update of EdgeOS, it overwrites all the default partitions such as /usr. Therefore I decided to store my files in /config/user-data, which is an area that persists between system updates.
On EdgeOS:
mkdir /config/user-data/cloudflared
On the machine used to upload:
scp cloudflared user@erx:/config/user-data/cloudflared

I also decided to store a copy of config.yml in this directory before copying it over to /etc/cloudflared/config.yml. That way after an upgrade I have less work to do.
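
For reference, a config.yml for this use case only needs the DNS-over-HTTPS proxy settings. Something along these lines should do it (the key names and the port are from my reading of the cloudflared docs, so double-check them against the version you install); dnsmasq on the ER-X then forwards queries to 127.0.0.1 on that port:

# Hypothetical example config.yml: run cloudflared as a local DNS-over-HTTPS proxy.
# Verify the key names against the cloudflared version you actually install.
proxy-dns: true
proxy-dns-port: 5053
proxy-dns-upstream:
  - https://1.1.1.1/dns-query
  - https://1.0.0.1/dns-query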

EdgeOS v2 uses systemd instead of init.d for startup.

sudo cp /config/user-data/cloudflared/config.yml /etc/cloudflared/config.yml
sudo /config/user-data/cloudflared/cloudflared service install
sudo vi /etc/systemd/system/multi-user.target.wants/cloudflared.service  

Modify the ExecStart line to include the pid info (not sure if we need this with systemd):

ExecStart=/config/user-data/cloudflared/cloudflared --config /etc/cloudflared/config.yml --origincert /etc/cloudflared/cert.pem --pidfile /var/run/$name.pid --no-autoupdate 
sudo systemctl enable cloudflared.service
sudo systemctl start cloudflared.service
systemctl status cloudflared.service  

Also /usr/sbin wasn’t in my path, so I had to call tcpdump directly.
/usr/sbin/tcpdump -nXi eth0 port 443 and dst host 1.1.1.1

After an upgrade I should only need to re-run a few of the steps above.

Satellite Fun

I’ve been slowly working towards building a satellite uplink and downlink station for years. I have finally reached the point where the station is built and I have received some signals.

The performance is rather abysmal, so I still have a fair amount of work to do.

Causal Inference Resources

I was inspired by the post “Why you should stop worrying about deep learning and deepen your understanding of causality instead” to write up some of the resources I’ve used over the past year as I myself have tried to learn more about causality.
The field of Causal Inference has become much richer and more interesting over the past 20 years as a number of new statistical tools were created to help address the bias inherent in model-dependent statistical inference. I find it’s best to start with understanding the split between prediction and causal inference that has existed in the field for quite a while. Each of the following three references goes into much more detail about how many of the same tools are used for both causal inference and prediction, but the meaning assigned to the model, and in particular how you evaluate the model for appropriateness, is very different depending on what you’re trying to do.
 
Statistical Modeling: The Two Cultures : http://projecteuclid.org/download/pdf_1/euclid.ss/1009213726
 
My team and I spent a lot of time dealing with observational data. Therefore much of my focus has been on how to make better decisions when dealing with observational data and quasi-experimental study design. There’s been a lot of research in this area because so many medical studies are based on observational data. The Evidence Based Medicine movement came out of a desire to improve clinical decision-making outcomes and provides many ideas that can be reused within my own field. One of the pieces that is fantastic for decision-making in general is the hierarchy of evidence. This provides a framework within which to base your decision making and understand how biased your study could possibly be.
 
One of the articles I really enjoyed coming across was by Rubin: “For objective causal inference, design trumps analysis”. In it he briefly covers the counterfactual framework and reworks an observational study through the lens of experimental design, using the appropriate tools to approximate a true experiment to the best of his ability. It definitely gave me a much better understanding of the role of treatment assignment and how it factors into causal inference.
 
And now onto books!
 
The first book is particularly awesome and mathy. I find that it hops right in and covers the key concepts you need to understand about modern causal inference theory. That is both a strength and a weakness: if you’re not up to date on reading mathematical notation, it can be a little challenging.

“Counterfactuals and Causal Inference” by Morgan and Winship

This was the first book I got. I actually had the first edition and upgraded to the second edition when it came out; definitely worth it. I found many of the topics more approachable in this book than in the previous book, but they restrict the set of tools they give you. Therefore I found it a great place to start and become comfortable with counterfactual theory and causal diagrams, but I eventually had to upgrade to the book out of the Harvard school of public health.
 
Many papers you encounter will refer back to the work in this book, which is largely a compendium of the research done by Rubin. I found it an additional perspective on many of the concepts covered in the previous two books. So probably not required, but nice to round things out.
 
This book showed me how little I really knew. It was the last one I purchased and I still haven’t finished it. I really need to sit down and compare the contents of this textbook against the second half (Model Dependent Causal Inference) of the Causal Inference book out of Harvard.
 
OK, this book hasn’t shipped and I haven’t read it, but I’m very excited by it. Judea Pearl’s other book, “Causality: Models, Reasoning and Inference”, is well regarded, but also known to be very difficult as it connects causal reasoning in several different fields into one overarching framework. He also has a blog where you can stay up to date on some of the latest books and research in this area: http://causality.cs.ucla.edu/blog/index.php/2016/02/12/winter-greeting-from-the-ucla-causality-blog-2/ .
 
Lastly, one of the early papers I encountered that I felt did a good job in this area: Sekhon, J. S. (2011). Multivariate and propensity score matching software with automated balance optimization: The Matching package for R. Journal of Statistical Software 42(7). http://www.jstatsoft.org/v42/i07 . I found his package rather straightforward to use and performant enough to work against the large data sets I deal with on a regular basis.
 
If you’re ever in the Seattle area and want to chat about these things, I would love to do coffee.
 
–chris
 
 
 

Date Based Cohort Analysis for Adobe SiteCatalyst using R

Over the years I’ve generally avoided Excel. Being a programmer, I could just pick up Python and write code to do what I needed; I didn’t need to hack something together in Excel. But I always ended up back there for the charting.

Then I learned R, and now I have even more reason to avoid Excel.

Recently I needed to implement date based cohorts in SiteCatalyst. While there are a few blog posts on how to do this in Excel using Report Builder (http://adam.webanalyticsdemystified.com/2013/03/07/conducting-cohort-analysis-with-adobe-sitecatalyst/ , http://blogs.adobe.com/digitalmarketing/mobile/what-is-mobile-cohort-analysis-and-why-should-you-use-it/), they didn’t work for me. My team is all on MacOS, and Report Builder isn’t.

In this example I’m going to use events tracked by the Mobile Library lifecycle stats. One plus of this solution is that it doesn’t require any SAINT classifiers to convert mobileinstalldate to a month/year.

The idea here is that you use QueueTrended to chunk together uniqueusers by month, with mobileinstalldate as the counted event. If you look at the data output from QueueTrended, it makes more sense. The rest is then using plyr and reshape2 to beat the data into the form we want. It works just fine with segments.
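
As a rough sketch of the core call using the RSiteCatalyst package (the report suite ID, dates, metric name, and credentials below are all placeholders, and the exact column names you feed to dcast depend on what QueueTrended hands back for your suite):

# Rough sketch (untested) using RSiteCatalyst. Report suite, dates, metric, and
# credentials are placeholders. QueueTrended returns the metric broken out by
# mobileinstalldate for each month; dcast then pivots it into a cohort table
# with install-date cohorts as rows and calendar months as columns.
library(RSiteCatalyst)
library(reshape2)

SCAuth("api.username:company", "api.secret")

cohort_raw <- QueueTrended(
  reportsuite.id   = "myreportsuite",
  date.from        = "2014-01-01",
  date.to          = "2014-06-30",
  metrics          = "uniquevisitors",   # or whichever mobile metric you track
  elements         = "mobileinstalldate",
  date.granularity = "month"
)

# Pivot: one row per install date, one column per month (column names may vary
# slightly depending on the data QueueTrended returns).
cohort_table <- dcast(cohort_raw, name ~ datetime, value.var = "uniquevisitors")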

I’m not sharing my code that generates percentages because I’m not particularly happy with it yet. Drop me a line if you are interested.

And yes, the data is small; this is from a private, unreleased product I am working on.

The Lean Startup Movement from a Decision Science perspective

First off, my apologies to actual Decision Scientists. I have no formal training and just recently learned that the area I’m fascinated by actually has a name.

There are a lot of anecdotes out there about how wonderful all the different Lean Startup methodologies are. If you go and read the Amazon book reviews, you’ll see lots of comments about how it changed someone’s life.

What you won’t see is any data showing they actually help.

I’m a skeptic at heart. Studies on businesses are notoriously difficult to do, and while I could see value in the tools and techniques, I really wanted a deeper scientific basis for all the hype.

I finally managed to find it within the Decision Science literature. Daniel Kahneman has done a great survey of key ideas and concepts in “Thinking, Fast and Slow”. It’s a long book, but for the purposes of this discussion we really care about part 3: Overconfidence. “The Signal and the Noise” by Nate Silver also talks a fair amount about our inability to do forecasts well. Lastly, “Naked Statistics” provides another view.

The Lean Startup Methodologies (LSM) are designed to help you answer two questions:
1) Should I even bother building a full product?
2) If I do build the product, should I continue with incremental changes, or pitch the whole thing and start over?

As an entrepreneur with a product or business idea, you are going to be subject to three different cognitive illusions. These are things that are common to all of humanity, and there is little you can do to train them away. The three illusions I see as being the most problematic are Optimism Bias, Domain Expert Overconfidence, and Confidence in Prediction in an unstable environment.

Optimism Bias
Humans in general tend to think they perform better than average. In particular, entrepreneurs tend to be even more optimistic than the general population about their ability to beat the odds. For instance, the base rate of failure for a new business is 65%, but the vast bulk of entrepreneurs put their chance of failure at 40%. This delusion can help us make it through the day, but it can also cause you to stick with something for far too long. (Kahneman, Daniel (2011-10-25). Thinking, Fast and Slow (p. 256). Farrar, Straus and Giroux. Kindle Edition.)

Domain Expert Overconfidence
Everyone is overconfident about their ability to predict the future. It doesn’t matter how much of an expert you are in your field; you are overconfident. Time and time again, research shows that a crappy mathematical model informed by expert opinion is almost always better than either one alone. (http://www.rff.org/Events/Documents/Lin_Bier.pdf, all of “The Signal and the Noise”)

The second part here is apropos of a problem I’m wrangling with at work. I’m working on a product that is all about market creation. By definition, my ability to research and learn about my market is terribly limited since the market doesn’t exist. Learning the Lean Startup tools improves your metacognitive ability to see your own weakness in expertise and therefore adapt to it. (http://gagne.homedns.org/~tgagne/contrib/unskilled.html)

Confidence of Prediction
When you ask yourself “Do I need this feature in my product?”, the question you are really asking is “Will adding this feature to my product add enough value to my business in the future to justify the (opportunity) cost now?”. That is attempting to forecast the future, something humans can be good or bad at depending on how quickly they get feedback about their decisions. Firefighters and nurses, who get almost instantaneous feedback on the quality of their forecasting, are able to build a great amount of skill in this area. Those of us operating on longer time scales fall prey to building confidence in our predictions without actually improving our accuracy. Think of how much that sucks for a moment. You can be in a field for years, making forecasts, assuming you are getting better, when in reality you aren’t. (Kahneman, Daniel (2011-10-25). Thinking, Fast and Slow (p. 240). Farrar, Straus and Giroux. Kindle Edition.)

Essentially it all boils down to the fact that you are going to feel really confident about your idea and plans, but that confidence is not based on actual hard data. It is instead an illusion fed by how we perceive and think about the world. You can’t use your gut ‘confidence’ check to know whether you really have a product at the get-go; we’re just not wired that way.

But by acknowledging our limitations, we can figure out ways to work around them.

So how do you deal with this trifecta of Overconfidence in your idea?

To quote Steve Blank: “GET OUT OF THE BUILDING” and go talk to customers.

That’s it.

All the different LSM out there have different suggestions on how to go about defining a market segment, finding customers, and talking to them. But it boils down to talking directly to customers to counteract your overconfidence. They all recommend a progressive approach to your research. You start with wide open investigational interviews, and as you learn more (and validate or invalidate your overconfident beliefs) you start using more structured interviews to gather more accurate (but less wide ranging) data, including things like surveys and prototypes. You can even run A/B value prop testing with a mock web site and Google AdWords.

That said, you need to be cautious about over-generalizing your results. We humans love to see patterns and overvalue things we see and measure, even when they deserve low confidence. (Patrick Leach (2006-09-15). Why Can’t You Just Give Me The Number? (Kindle Locations 1911). Probabilistic. Kindle Edition.)

Launching – Answering the “Persevere or Pivot” question

These pre-launch experiments have a whole range of cost, accuracy, and specificity. On the low end you have informal unstructured interviews. These are great for proving things out early on, and also for finding a better business idea than the one you thought of originally. On the high end of cost and complexity, you have large scale polling that you do right (i.e., random sampling of customers, proper question wording to avoid bias, etc.). These aren’t going to find you that better business idea, but they can provide a very accurate and specific answer to the question you pose.

At some point, though (and this will be very specific to you and your circumstances), you’ll need to sit back and decide to stop pre-launch experiments. When do you stop? When failing in market will be better than continuing to run experiments. For example, if you have a large existing business (let’s say $100M in annual revenue) that you are thinking of disrupting with a new revenue model, you probably want to use the more formal methods, since spending $50,000 on a polling firm and taking the time is small relative to the risk. But a six person startup with no actual revenue yet? It depends on the size of your market. If you only have 100 customers, you may want to do more upfront work, because running experiments on customers can cause them to get grumpy and leave. But if you have a larger target market, it’s fine to lose some customers while you work things out. If you are looking for a more decision science based approach, see Chapters 11 and 12 in Patrick Leach (2006-09-15). Why Can’t You Just Give Me The Number?. The different market sizes account for the different approaches in the LSM books.

Once you get a product in market, you are still subject to the same overconfidence illusions around forecasting. This is where the second part of the LSM stuff kicks in: Analytics and Rapid iteration.

I’m totally thrilled to watch market after market get disrupted by rapid prototyping. On the hardware side we had FPGAs come along in the ’90s that allowed really interesting products to be built without the capital outlay needed for an ASIC. On the SaaS side, the AWS/DevOps/hardware-as-software movement has added nimbleness to that field. Outside of computing, the revolution around rapid prototyping, 3D printing, and cheap CNC tools (like CNC plywood routers) has drastically changed things. Even the repatriation of hard goods manufacturing is occurring because it allows businesses to iterate faster (http://www.nytimes.com/2011/10/13/business/smallbusiness/bringing-manufacturing-back-to-the-united-states.html?pagewanted=all).

How can overconfidence get you after launch? Go read the opening chapters of “The Startup Owners Manual” to learn about how Webvan’s overconfidence caused them to ignore the metrics they were getting and fail big.

The steps at this point are:
1) Ship an iteration of the business (this includes ad copy, market segment, marketing website and materials, and the actual product)
2) Observe behavior using quantitative metrics
3) Use that to drive qualitative discussions with customers
4) Make a hypothesis and modify product/web site/ad copy
5) Repeat

It’s easy to get analytics wrong. Eric Ries labeled the bad ones ‘Vanity Metrics’: metrics that are pretty much guaranteed to give you the answer you want (generally up and to the right). But much like qualitative interviews, there is a broad spectrum of accuracy and complexity around implementation. For that first launch you don’t need much. Just a retention chart keyed off the activity that drives your engine of growth is enough to shake your confidence. You are looking for analytics that help you detect the huge problems in your overconfident assumptions. You aren’t at the point where you care about a 3% improvement in a number or running a linear regression on your data.

Don’t know what metrics to track? Grab a copy of Lean Analytics (http://leananalyticsbook.com/). They break down a large number of different business models and what you should be looking at to decide whether to throw in the towel or not.

How quick should your iterations be? As quick as possible without pissing off your customers or partners. For instance, if you are growing rapidly you should iterate quickly (daily, even?). Those people irritated by all the change will always be a shrinking proportion of your total customer base, since you are getting new customers at a very fast rate. I personally (overconfidently and untested, of course) think you need to be willing to lose your early customers and therefore shouldn’t worry about them.

One last moment of reflection. These are all really cool tools. But if you’ve already decided on a course of action, the value of any new information may be zero, since it will not change your mind. In that case, just make your decision and move on. I like these tools (in particular stochastic modeling), but in all honesty, if you crack open Bayesian theory and run the numbers, they only increase your odds of a good outcome by a small amount. This is due to the huge amount of raw luck and chance that exists in the world. A lot of this is outside of our control (I feel for everyone who launched a new business right before the Great Recession).

So have fun and enjoy yourself!

Books:
“The Lean Startup”
http://theleanstartup.com/ 

“Lean Analytics”
http://leananalyticsbook.com/ 

“Thinking, Fast and Slow”
http://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555 

“Naked Statistics” 
http://www.amazon.com/Naked-Statistics-Stripping-Dread-Data/dp/0393071952 

“The Signal and the Noise”
http://www.amazon.com/The-Signal-Noise-Many-Predictions/dp/159420411X 

“Why Can’t You Just Give Me The Number? …Guide to using Probabilistic Thinking to Manage Risk and to Make Better Decisions”
http://www.amazon.com/Guide-Probabilistic-Thinking-Decisions-ebook/dp/B0029F2STA 

“The Startup Owners Manual”
http://www.amazon.com/The-Startup-Owners-Manual-Step-By-Step/dp/0984999302