Sunday, February 5, 2012

On a Jew's View of Muslims in Europe

I have written about this in other forums, but now that I'm maintaining a blog, I thought I would add a post on the topic here.  A friend recently forwarded an article that was purportedly published in a Spanish newspaper (although apparently not) with the thesis that European guilt over the Holocaust has led to too lenient a treatment of Muslim immigrants, and that in effect trading Jews for Muslims has been a horrible choice.

Here is my response (slightly modified).

First, the tripe about English schools not teaching the Holocaust is just that: tripe.  It has been well debunked, and only dedicated racists are still using it.

Second, the description of the Jewish community that was destroyed in the Holocaust is simply wrong.  Yes, there was a vibrant and cultured Jewish community in Germany and other Western European countries that was destroyed.  Its members were educated at fine German universities and produced brilliant thinkers like Einstein and Freud, the kind the German universities of the era produced.  And this was a tragedy.  But they were a small minority of the Jews killed in the Holocaust.  Most were villagers from Eastern Europe.  More Jews from Poland were killed in the Holocaust than Jews from the rest of Europe combined.  These communities were not cultured; they did not produce brilliant thinkers.  And their destruction was every bit as tragic, because the murder of a poor subsistence farmer is every bit as tragic as the murder of a brilliant scientist when it is done simply because the victim has a particular race or religion.

Like all bigots, the Nazis rewrote history to justify their ideology.  And that is just what this author does.

Lumping all Muslims into a single collective enemy that seeks the destruction of Europe is exactly what the Germans did.  Oh wait, that was the Jews who were bent on the destruction of Europe.  The Jews who were cultural poison.  I won't claim to know much about Islamic immigrant communities in Europe, but I can speak to the ones here.  They have been surveyed extensively and, survey says, they are much like other American immigrants: hardworking, patriotic, grateful for the opportunities that America provides.  Before 9/11 most were Republicans.  The typical Muslim immigrant in America is no more extremist than my grandfather was.

Are there Islamic extremists?  Obviously.  Saudi Arabia seems to have a national system of education designed to produce them.  There are also Jewish extremists, or have you missed the growing Haredi communities in Israel?  There are Christian extremists too; think Randall Terry or Fred Phelps.  To generalize from a small minority of crazy extremists to whole communities is the definition of bigotry.

In all likelihood, it is true that the Islamic immigrant communities in Europe are poor and dirty.  That describes immigrant communities throughout history and across the globe.  It is not usually the rich and well educated in a society who emigrate; it is the poor and uneducated.  We idealize it now, but the Lower East Side was described in its time as hell on earth and the worst place in the world.  And the Jews (and others) who populated it were despised by the good citizens of the city.  They were excoriated as anarchists and terrorists, as dirty and uneducated, as trash.  Immigration laws were rewritten to keep them out.  There is nothing said about Muslim immigrants in Europe today that was not said about my great-grandparents and their generation in America.

And if Muslim immigrants get help from the government, so too did the Jewish children of those times, who escaped the horrors of the Lower East Side through the beneficence of a government that invested in public education, not just primary schools but also an excellent system of higher education, the City Colleges of New York.

It is quite simple: the author argues for racism and bigotry, the kind of racism and bigotry that has ostracized Jews in America and around the world and resulted in the murder of six million Jews in Europe.  As a Jew, I find the use of the Jewish experience to justify bigotry (and, let's face it, racism) to be a particularly odious obscenity.  It spits on the history of my people and my ancestors, and I couldn't find it more offensive.  Islam isn't the enemy; bigotry and extremism are.

Tuesday, January 24, 2012

On Why Scala is The Perl of JVM Languages

One of my first technical jobs was to develop a system in APL.  There are two lessons I still carry from that experience: one, I can write any program with fewer characters; and two, that doesn't make it a better program.  The mark of a great programming language is not the number of characters you can write a program in (see: APL) but how well the language allows the program's author to communicate with the program's readers.  Is the meaning of the sentences and paragraphs of the language obvious, or do I have to "work it out"?
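
To make that concrete in Scala (a contrived pair I made up for illustration; both functions compute the integer mean of a list):

```scala
// Two programs with identical behavior; the first "wins" on character count.
def f(l: List[Int]) =
  l.foldLeft((0, 0))((a, x) => (a._1 + x, a._2 + 1)) match {
    case (s, n) => s / n
  }

def mean(values: List[Int]): Int = {
  val sum   = values.sum
  val count = values.length
  sum / count
}
```

Both compile and both run; only one can be read without working it out.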

Unfortunately, Scala (and most other modern JVM languages) seems to have learned all the wrong lessons from Perl.  When I hear programmers make fun of Perl, I understand that they are really making fun of all those unreadable Perl programs they've had the misfortune to read.  You didn't have to write programs that way in Perl (and I didn't), but it sure seemed like there were too many $s, @s, bless this, and optional parameters that defaulted to $_, and just what did that evaluate to in the middle of a map?

And Scala does all of that.  Implicit conversions.  Methods that use symbols for method names.  A type system that uses + and - for covariant and contravariant types, and <: and >: for upper and lower type bounds, and who even understands what those things are?  Semicolons are optional except when they're not.  Braces are optional except when they're not, and sometimes they become parentheses.  And too many nifty ideas, like case classes, that don't work with the rest of the language.
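
To make the complaint concrete, here is a contrived sketch (every name in it is invented for illustration, not taken from any real codebase) that piles several of these features into a few lines:

```scala
// A contrived sketch (all names invented) of the features in question:
// variance annotations, type bounds, a symbolic method name, and an
// implicit conversion.
import scala.language.implicitConversions

class Box[+A](val value: A) {          // +A: Box is covariant in A
  def replace[B >: A](b: B): Box[B] =  // B >: A is a lower type bound
    new Box[B](b)
}

class Vec(val x: Double, val y: Double) {
  def +(other: Vec): Vec =             // a symbolic method name
    new Vec(x + other.x, y + other.y)
}

object Demo {
  implicit def doubleToVec(d: Double): Vec = // implicit conversion:
    new Vec(d, d)                            // Doubles silently become Vecs

  def main(args: Array[String]): Unit = {
    val v = new Vec(1, 2) + 3.0        // looks like arithmetic; it isn't quite
    println(s"${v.x}, ${v.y}")         // prints 4.0, 5.0
  }
}
```

None of these features is indefensible on its own.  The trouble is the last line of main: new Vec(1, 2) + 3.0 compiles, and nothing at the call site tells you that 3.0 was silently converted.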

Look, we can all agree that Java sucks.  I learned object-oriented programming in Smalltalk.  Java really sucks.  And collection classes require functional programming techniques to really work well.  I am certain that you can write code in Scala that is more elegant and more expressive than Java.  And I am equally certain that you can write code in Scala that makes the typical CPAN library seem like a beacon of clarity.  Scala may be a better choice than Java, but that doesn't make it a good language.

And I like programming in Perl.

Sunday, January 22, 2012

On Why is George Karl Still Coaching?

The Nuggets won a close one last night in New York.  Playing their fifth game in seven nights and having arrived at 4:00 am, the team was clearly exhausted in the fourth quarter and overtime.  This was made worse by Coach Karl's short bench: he played only seven players.  In the backcourt, this was forced by injuries.  But in the frontcourt, it was the result of inexplicable decisions by the coach.  While Timofey Mozgov had one of his better games, in what universe is he a better player than Chris Andersen, whom Karl is increasingly reluctant to play?  Here are the per-48-minute stats:

Raw Stats
            Min  WP48  Wins  PTS   DRB  ORB  REB   AST  TO   BLK  STL  PF
Andersen    204  .236  1.0   16.0  9.9  4.5  14.4  0.2  1.2  3.5  2.4  6.4
Mozgov      284  .024  0.1   15.2  8.8  2.9  11.7  2.0  3.0  2.5  0.7  6.3
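
For reference: WP48 is Wins Produced per 48 minutes, so the Wins column is just WP48 × Min ÷ 48.  For Andersen that is .236 × 204 ÷ 48 ≈ 1.0, and for Mozgov .024 × 284 ÷ 48 ≈ 0.1.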


And here are the shooting efficiency stats:

Shooting Efficiency
            FG%    2FG%   3FG%  FT%    eFG%   TS%    FGA   3FGA  PPS   FTA
Andersen    51.2%  51.2%  0.0%  66.7%  51.2%  57.8%  10.1  0.0   1.58  8.5
Mozgov      52.1%  52.1%  0.0%  73.7%  52.1%  55.3%  12.3  0.0   1.23  3.2
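
Note that TS% (true shooting) folds free throws into the percentage: TS% = PTS ÷ (2 × (FGA + 0.44 × FTA)).  That is why Andersen, who shoots a slightly lower FG% than Mozgov, has the better TS%: he gets to the line 8.5 times per 48 minutes to Mozgov's 3.2.  Checking against the tables above: 16.0 ÷ (2 × (10.1 + 0.44 × 8.5)) ≈ 57.8%.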


By the stats, Andersen is not just a better player, he is a MUCH better player.  And he can't get off the bench in a game where the team is playing tired?  A game where they were giving up second and third attempts because they were too tired to rebound?  Really?

The Nuggets are an interesting team this year, a team of good but not great players.  The only clearly bad player on the team is Mozgov, who is a Doug Moe big stiff: he's 7'1" and can't rebound.  This team will go only as far as George Karl's personal decisions allow, which is apparently not very far at all.

Tuesday, December 6, 2011

On The Existence of Job Creators

Job creators do not exist.  Jobs result from a functioning employment market.  Full employment simply means that the market is clearing.  If your business fails, another will form in its place and employ your displaced workers.  When the market clears, there are no special people who can create more jobs; clearing is a function of the condition of the market.

There are systemic factors that may prevent the market from clearing.  Conservatives would point to the minimum wage and business regulation.  Liberals would look at aggregate demand.  And both would consider the money supply.  But markets never fail because there aren't enough people willing to try to make money by hiring others.

At best, the people who claim to be job creators could be productivity enhancers.  Companies succeed, at least in part, by improving the productivity of their employees.  And improving the productivity of employees makes their labor worth more, which eventually results in rising wages.  Judging by the trajectory of American workers' wages over the past 20 years, it would appear unlikely that we are fostering many productivity enhancers these days either.

Tuesday, November 15, 2011

On Why Are We Testing Anyway

The paradox of testing is this: if testing has such limited value, why do companies that produce great software do so much of it?  If testing can't really improve quality and doesn't really provide data to decision makers, why do we bother?  What is the value of testing?

In my experience, testing is best understood as a finishing activity, a polishing activity.  It is the last few coats of clear paint that make a paint job shine.  It is running your hand across the surface of a finished product to find any remaining small burrs that need to be sanded down.  It is the attention to detail that differentiates value from junk.

This understanding of the purpose of testing unlocks the current trends and debates in the test community.  It explains why the traditional practice of testing as a planned, designed, and scripted activity is being replaced by rapid testing techniques.  If you have lots of the kinds of bugs that are uncovered by traditional, systematic testing techniques, then testing can do nothing for you.  But if you have done your development well, the kind of intelligent investigation that characterizes exploratory testing can uncover the problems you have missed.

Is testing dead?  If your business model does not require polished products, it may be.  When Alberto Savoia or James Whittaker talks about how testing can be replaced by pretotyping, dogfooding, or beta testing, they are really just saying that Google's business model doesn't require polished products.  For Google, it is more important to get customer feedback on the software's functionality than it is to get the software just right.  And that may be true for some software on the web, just as it surely is not true for software that is, for example, embedded in a device.

This does suggest that as companies begin to understand the real value of testing, they will have fewer but better testers. And this explains why outsourcing testing has not produced the expected benefits.  If you are trying to replace an intelligent exploratory tester with a phalanx of testers for whom testing is the script and nothing but the script, you get little value from it.

Testing, at least the kind of system level testing done by testers after development is "done," is part of the zen in the art of software development.  It is the attention to detail that differentiates software that users love from software that merely meets needs.



Friday, November 4, 2011

On Testing Will Not Solve Your Quality Problems

I remember a conversation I had in college with my linear algebra professor about his class.  He said it was a classic example of a double-humped class.  For half the class, he could teach the material twice as fast and it wouldn't be a problem.  For the other half, however, he could spend twice the amount of time and they still wouldn't succeed.

In economics, this is referred to as multiple equilibria.  I believe that software quality has two equilibria.  We have understood for decades now the types of practices that are needed to develop working software.  Whether it takes the form of code reviews, pair programming, or a gatekeeping committer, every line of code needs to be seen by multiple pairs of eyes.  Whether by convention or by test-driven means, you need comprehensive unit tests.  Since code evolves, you need automated tests that cover the code to tell you when changes have had unexpected effects.
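
As a minimal sketch of what I mean by an automated unit test, here is one in plain Scala (no framework, and PriceRules and its discount rule are invented purely for illustration):

```scala
// A minimal automated unit test: plain Scala assertions, no framework.
// PriceRules and its rule exist only for this example.
object PriceRules {
  def applyDiscount(price: Double, rate: Double): Double = {
    require(rate >= 0.0 && rate <= 1.0, "rate must be between 0 and 1")
    price * (1 - rate)
  }
}

object PriceRulesTest {
  def main(args: Array[String]): Unit = {
    // Cover the normal case and both edges, so a later change that
    // breaks the rule fails loudly instead of slipping through.
    assert(PriceRules.applyDiscount(10.0, 0.5) == 5.0)
    assert(PriceRules.applyDiscount(10.0, 0.0) == 10.0)
    assert(PriceRules.applyDiscount(10.0, 1.0) == 0.0)
    println("all PriceRules tests passed")
  }
}
```

The point is not the framework; it is that every behavioral rule in the code has a check like this that runs on every change.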

Creating working software requires the disciplined and diligent use of these techniques.  It requires both intention and effort.  It is, as a result, subject to Broken Windows effects.  When you start to loosen your practices, you send the message that quality is not a goal and create an environment where the number of quality problems begins to escalate.

When managers try to solve quality problems by doing more testing, they are really trying to avoid the cost of reaching the "working software" equilibrium.  Unfortunately, the same lack of attention to quality that created the problems sabotages our efforts to fix them as well.  Trying to take the cheap way out reinforces the social norm that quality is not really that important for developers.  We are simply unable to realize a specific level of quality by dialing our quality practices up or down.  And testing alone will not enable us to reach the working software equilibrium.

Testing will not solve your quality problems.



Wednesday, November 2, 2011

On The Limited Value of Software Testing

In a recent article, Michael Bolton discussed how testing problems like too few testers, too little time for testing, or unstable test builds are really test results.  And while I agree with him, this crystallized a question that I've been dancing around for some time now.  I've been involved with software testing for almost 30 years, and the problems Michael mentioned are the same ones we were complaining about 30 years ago.  How can this be?  It's not as if we don't know how to do better.


Unlike Dilbert, I can't simply lay all problems at the feet of pointy-haired bosses.  If the same result happens year after year, across companies, across industries, and across countries, there must be some deeper principle at work.  Economics, another of my interests, suggests that the answer lies in looking at how scarce resources are allocated to produce value.  If we consistently fail to apply resources to an activity, perhaps it doesn't really have the value we think it does.



What is the value of software testing?  As I have previously discussed, testing has many meanings.  While some of these are pretty thought provoking, I believe that the people who hire testers expect their primary value to come from executing the software with the goal of finding bugs.  (That many of the alternate meanings appear to have been created by testers to convince management that they provide value in other ways may well prove my point.)  A lot of testing is done by a lot of people, but I'm talking about the kind of testing done by testers: testing at the system level with the goal of finding bugs, testing that is traditionally done at the end of the release cycle, although agile groups do a better job of distributing it throughout the cycle.


And what is the value of finding bugs?  In theory, finding bugs should allow us to deliver better products to our customers, or at least make better decisions about those products.  In practice, however, we fail to accomplish these goals, and even when we do, it is generally not by using the bugs we find through testing.  And if we don't really benefit from the effort spent finding bugs, perhaps managers are acting rationally, if not always consciously, by not investing more in testing.


How is it that finding bugs fails to result in better products or better decisions? Here lies the heart of the issue.
First, I hope that we all understand that you can't find all the bugs.  There is simply no such thing as complete testing.  We build complex systems that interact with complex environments.  You can't even find all the interesting bugs.  To believe otherwise is hubris.  The resource-constrained environments in which we work force us to make decisions about the testing that we won't do and the bugs that we won't find.  If you are in the software business, you are, at least in part, in the business of delivering bugs to your customers.


Your ability to find bugs is even affected by the quality of the software you're testing.  When the software is buggy, testing is hard.  Builds fail.  You start to explore one failure and get sidetracked by a completely different failure.  The time spent investigating and reporting bugs eats into the time that was supposed to be spent testing.  Even worse, software bugs have the strange property that the more bugs you find, the more bugs remain to be found.  Buggy software ends up being tested less effectively simply because it is buggy software.


Finding the bugs doesn't actually improve the software; you also have to fix them.  And the kinds of teams that write buggy software are the kinds of teams that have problems fixing them.  Finishing the functionality that didn't get completed on schedule takes precedence over fixing bugs.  Deeper design and architectural problems are patched over because they would take too long to fix.  As the release date nears and the number of bugs mounts, the time spent debugging gets compressed, making fixes less likely to work.  Eventually, the whole process comes to a halt as we give up and deliver the remaining bugs to our customers.

You can't get working software by starting with buggy software and removing the bugs one by one.

In the end, the quality of the software after testing is much the same as the quality of the software before testing started.  Yes, some bugs get fixed before being found by customers, and that is a good thing.  This explains why we invest in testing at all.  But software that was buggy when we started testing is still buggy when we finish.  Good software needs less testing, and the testing finds fewer bugs that need to be fixed.  For buggy software, testing is less effective, and the kinds of teams that write buggy software are the kinds of teams that can't get the bugs fixed.  Whether you find only a small number of bugs or you can't fix the ones you do find, the value of finding bugs is limited.


The lack of real improvement in the quality of software as a result of testing was recognized long ago.  So testers changed the goal.  Instead of making the software better, testing would allow managers to make better decisions about whether and when the software would be released.  Unfortunately, it turns out that decisions aren't really made this way.  Personal and organizational considerations trump data in any complex corporate decision, particularly in the types of organizations that produce software that should be shelved or substantially delayed because of quality problems.


In the typical organization, testing happens near the end of the release cycle.  By that time, expectations about the delivery of the software have been set.  Managers are rewarded for meeting commitments and making dates.  As the testing progresses, the cost to the decision maker of missing the date escalates.  Loss aversion and the sunk cost fallacy kick in.  It becomes almost impossible to choose not to deliver the software, at least not without significant personal risk.  Even significant delay has a cost.  The inertia of a release date makes it really hard to miss.  It's easier to just make the date and plan the maintenance release.  And there are significantly fewer consequences.


The whole notion that testing can tell you when software is ready to be released is flawed.  It depends on the software improving through the testing process and, as a result, on being able to project when it will reach some level of quality at which we can release it.  But software doesn't really get better through the process of testing.  Buggy software never reaches an improved level of quality.  The question can't be when the software will reach the level of quality we want for release, but whether we are willing to release the software with the level of quality it has.

We could increase the value of the bugs we find by using them to improve how we develop software. That is usually how teams learn to develop good software. This is one of the key advantages of effective agile teams. It is my experience, however, that the quality of the software directly reflects the well-being of the team that creates it. Teams that create lots of bugs turn out to be the kinds of teams that are unable to learn from them.

In the final analysis, the value we get from finding bugs is limited.  Teams that create good software get good value from the bugs they find by removing them and learning from them.  But since there are few bugs to be found, finding them is expensive and the value is limited.  Teams that create buggy software face the opposite problem.  It's easy (and cheap) to find bugs, but these kinds of teams are incapable of using them effectively.  So the value is limited.


Spending more money on testing won't change that.  If you want to deliver good software, you have to write good software.  Yes, testers contribute to the quality of the software in more ways than just finding bugs.  But it turns out that finding bugs has only a limited impact on quality.  If I asked you to invest in improving the quality of your software, you would almost certainly get better results by improving how you write the software, not how you test it.


Tom Gilb describes testing as "a last, desperate attempt to assure quality."  Viewed this way, we can reconcile why organizations both invest and under-invest in testing.  We can set expectations that are appropriate and can be met.  And we can develop ourselves and our profession to best meet those expectations.