
What are these plants?

Several years ago the City of Olympia planted some trees along our street, and these were immediately attacked by the low-life vandals who walk through the neighborhood from time to time, heading from the downtown bars to the Section 8 housing located up the street from us.  The new trees were barely a week in the ground when these idiots (don’t know who they were, but I know their kind, at least) deliberately broke many if not most of them in half.  City crews came out and tried to fix them, but the vandals had done their work too well, and the tops of these trees eventually died.  Sickening.

But, the trees themselves did not die, and they continue to grow.  It hadn’t occurred to me to wonder what kind of trees these were until a couple of days ago, when I happened to notice that there were two varieties.  The ones growing on the north side of the street have sort-of spiky light green leaves and produce copious quantities of red-orangish berries.  The ones growing on the south side have more rounded but dark green leaves and produce copious quantities of dark red berries.  This piqued my curiosity.  Neither produces tasty berries — they aren’t too bitter, but they are astringent, and the birds don’t seem interested in eating them.  So what are they?

A query via email produced an interesting reply.  The responder wasn’t sure of the exact species, without a photo at least, but suggested they were either Serviceberries or Mountain Ash.  A quick look at Wikipedia showed that these were in no way Mountain Ash (or Rowan) — the leaves are completely different, and so are the berries.  But Wikipedia makes it pretty certain these are some species of Serviceberry, or Amelanchier.  But which two species?  At least some of these species are edible, but neither of the two types of tree on our street is producing fruit that I would call edible.

Well, I took a photo of the fruit and some leaves from both species. Here:

Serviceberries

What species of Serviceberry are these?

Any ideas?  I’m sending a link to this post to the City Arborist, and if I get a response, I will update accordingly.

Fluency

It’s been quite a while since I’ve bought and read any new science-fiction by any but a select circle of writers.  My favorites have been Jerry Pournelle (who doesn’t write much any more, unfortunately), Lois McMaster Bujold, David Sherman, and Dan Cragg (who has completely retired).  But completely by chance I happened upon a new writer, Jennifer Foehner Wells, who has a first novel out: Fluency.

Now, not everyone with a first novel is a Tom Clancy, whose first, The Hunt for Red October, was an instant classic.  But Wells’s book is in that league, I feel.  Think I’m exaggerating?  I will admit to only being slightly less than halfway through the book, but I am finding it a very worthy read, and hard to put down.  Since I have it on Kindle, I can carry it around and read it any time, so being hard to put down isn’t a bad thing.  But this book is fascinating, and I am looking forward to its conclusion!  I understand Jen is working on a sequel already, and this is good news.

Fluency is a “first contact” novel, meaning that it is about first human contact with an extraterrestrial race.  And it’s near future as well.  Pretty much current technology, so it isn’t hard to relate to.  Let me repeat the first part of the story synopsis:

NASA discovered the alien ship lurking in the asteroid belt in the 1960s. They kept the Target under intense surveillance for decades, letting the public believe they were exploring the solar system, while they worked feverishly to refine the technology needed to reach it.

The ship itself remained silent, drifting.

Dr. Jane Holloway is content documenting nearly-extinct languages and had never contemplated becoming an astronaut. But when NASA recruits her to join a team of military scientists for an expedition to the Target, it’s an adventure she can’t refuse.

The ship isn’t vacant, as they presumed.

I am so glad the author didn’t choose to start the book on earth and painstakingly delve into the assembly of the crew and all the technical details. She gets pretty much right into it, and leaves the background information to a few quick flashbacks — which do not at all detract from the plot. The crew is wonderfully human, and not a bunch of perfect jocks (like you expect astronauts might be). They have realistic characters, and are developed pretty much “just right”.

I recommend this book for those of my readers who are looking for a good SF read.  Buy it Here.

Rediscovering Yahoo!

Recently I needed a new email account that was not Gmail.  Note that I have about six or seven (lost count) Gmail accounts, one for each of several purposes.  But now I needed one with my Ham Radio callsign, and it’s only got five letters.  Five-letter account names are too short for Gmail (why?), so Gmail was out.  The criteria for the new email provider were: free, and not a startup that would be gone next week.  And I thought I would try Yahoo!

I used to use Yahoo! for web searches (a loooonnnggg time ago), but haven’t been there in literally years.  So I got my new email account and had a quick look at the overall site.  I was surprised!

Yahoo! was actually INTERESTING!  There was news, there was entertainment, and there were videos.  I am not sure if it’s doing YouTube-type crowd-sourced videos, but there they were.  I guess times change things.

And yes, apparently you can still search the web on Yahoo!

Nice.

I was browsing StackOverflow.com today, and ran into this question on doing transactions in SQL Server.  Since I was familiar with using the SqlTransaction class, I thought maybe it was a question I could answer, but then I saw the response by Anders Abel about TransactionScope.  Wow!  I had never known this existed, and it seemed like a much better way of doing transactional operations without getting into SqlTransaction.

However, Remus Rusanu commented on Anders’ answer, suggesting that this wasn’t the best way to use TransactionScope.  He pointed to a blog post by a Microsoftie that gave serious cautions about using it “straight up” without some modification.  I had a look at the article and I was really excited about using TransactionScope with that technique.  I can think of a few places it might have saved me some grief in the past.  And I’m posting a link to the MSDN article, “using new TransactionScope() Considered Harmful”, here, mainly for my personal future use, but YOU, dear reader, might find this valuable too.  So here it is:

using new TransactionScope() Considered Harmful by David Baxter Browne
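For my own future reference, the gist of Browne’s advice is to never construct the scope with the bare defaults (in System.Transactions, those are Serializable isolation and a fairly short timeout), but to pass explicit options instead — ideally from a single factory method.  A minimal sketch of that pattern (the helper class name is my own):

```csharp
using System.Transactions;

static class TransactionHelper
{
    // Factory method so the safe options are defined in exactly one place.
    public static TransactionScope CreateScope()
    {
        var options = new TransactionOptions
        {
            // The default is Serializable, which is rarely what you want
            // and is prone to blocking and deadlocks under load.
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TransactionManager.MaximumTimeout
        };
        return new TransactionScope(TransactionScopeOption.Required, options);
    }
}

// Usage sketch:
// using (var scope = TransactionHelper.CreateScope())
// {
//     // ... do the SqlConnection/SqlCommand work here ...
//     scope.Complete(); // commit; omitting this rolls back on Dispose
// }
```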

Enjoy coding!

Phil Plait, he of the Bad Astronomy blog (it’s on Slate these days, so you know he’s for real), is a fun read.  He reports all kinds of interesting astronomical stuff, and I enjoy reading his work.  On some days, however, he insists upon breaking wind about non-astronomical matters, and in many of these blog entries he gets to be quite annoying.  Can you say: specialist trying to comment outside his area of specialization?  That’s Phil Plait from time to time.

Global Warming Deniers

Phil gets really exercised about people who won’t take global climate change seriously.  He seems to regard them as the moral equivalent of Holocaust Deniers.  A recent post of his concerning old Arctic ice melting is a case in point.

Phil has posted about his Denier fixation before, and while I definitely appreciate his frustration (I have to deal with people who can believe the craziest things, too), what does he expect to accomplish?  He writes:

We don’t know how long it will be before we see our first ice-free Arctic summer, but it may be as soon as 30 years. Most likely it will be somewhat longer; I hope so. But the bottom line is that the ice is going away due to global warming, and as it does we’ll see worse and worse effects from it. The time to stick our heads in the sand about this is long, long gone.

OK, let’s assume that climate change is real, and that we’re warming up.  Further, let’s assume that the warming is caused by human activities (something that I am still not convinced of).  What about it?  More to the point, what do we do about it?

Enact the Kyoto Accords?  Everyone stop breathing? Kill all the cows?

Well, cattle are responsible for enormous amounts of methane, an even more potent greenhouse gas than carbon dioxide, so they have it coming.  Clearly.  And lest PETA put out a contract on me: I’m joking.

When was the last time something like this happened?  It might have happened during the so-called Medieval Warm Period (MWP), a period lasting three hundred years from 950-1250 AD.  I have been unable to determine if the warming effects included an ice-free Arctic Ocean or not, and what I’ve read suggests that the warming was not global.  I guess some scientists prefer to call this period the Medieval Climatic Anomaly. And apparently during this time the southern hemisphere was experiencing other effects than increased warmth – Antarctica was colder than today, for example, and the tropical Pacific was cooler and drier.

And then there’s the Roman Warm Period!  This proposed period, RWP, is less well attested than the MWP.  But the point is, this period from 250 BC to 400 AD was another time in which global climate got frisky, it seems, much the same way during the MWP.  And in neither one of these periods were the Vikings or the Romans driving SUVs or selling carbon credits to the Huns.

So, Who’s to Blame?

Phil Plait is one of many would-be Cassandras desperately trying to get our attention about how Global Climate Change is going to Kill Us All.  And ironically enough, from the viewpoint of the Cassandra Chorus, we have met the enemy and He is Us (with apologies to Walt Kelly).  I’m not convinced, of course.  In this game of correlation and causation, who would I prefer to blame?  The cows, of course.  There are now more cows upon the face of the earth than there have ever been before.  If you can’t see the obvious correlation there isn’t much I can do for you.

But joking aside (and quite frankly I don’t give a darn whether we humans are driving the climate change or not), look at this chart of global temperatures over the last 12,000 years (the chart is from the Wikipedia article on the Holocene Climatic Optimum):

What do you see there?  Notice that modern times are on the right side of the chart (the right edge is 2000 AD).  You see the humps of the MWP and RWP, occurring at around 1,000 and 2,000 years ago respectively?  You see the right edge of the chart showing temps in about the same neighborhood as the MWP and RWP?  And further, that as we move off the chart to the right, the temperature line goes up to near 0.5 degrees.  Now, that’s hot, but notice that it’s only just a little hotter than it was 8,000 years ago!  In short, we’ve been here before.  Is it perhaps too early to panic?  Well, perhaps not, since temperatures now are hotter still — off the chart to the right, in fact, they’ve popped up dramatically to over 0.5 degrees (see the note there for year 2004?).

But I do wish to have you consider the entire chart.

See how the temperatures fall off very dramatically as we go backwards past 10,000 years?  That, my friends, is the last glaciation period.  It’s warmed up since then, yes?  But I want you to lay a ruler, figuratively, along the middle of that squiggly line, starting at around 8,000 years ago where the temperature line crosses 0 degrees, and ending at about the midpoint of the upward-trending squiggle, at about -0.25 degrees.  What slope do you see on your ruler?  That’s right!  Downwards!

In other words, ever since the temperature peak 8,000 years ago, near the end of the last glacial period, and up until just very recently, we’ve been trending colder, not hotter.

And here’s something you may not be aware of: we are not yet out of the last Ice Age.  I can hear your eyelids snap open in surprise at this, so I will repeat myself: we are not yet out of the last Ice Age!  We are presently in a period known as the Quaternary Glaciation, which started 2.6 million years ago and hasn’t ended yet.  The only reason why you’re not sitting on a huge pile of ice reading this is because we happen to be in what is called an Interglacial Period.  This one even has a name: the Holocene Interglacial, “Holocene” being a fancy scientific name for “modern times”, in case you’re wondering.

And do you know what the paleoclimatologists were doing in 1972?  They were worried that we were heading out of the current interglacial and into the next stage of glaciation.  Or, in other words, they were worried about global cooling.  There were at the time several articles in popular magazines reporting on this worry; can you remember back that far?  The climatologists felt that since interglacial periods tended to last about 10,000 years, and we were 10,000 years into the current interglacial, things were about to start getting seriously colder — and the data on the chart above tended to bear them out.

Since then they have changed their minds, however.  The increase in atmospheric CO2 has given good cause to believe that the trend is reversing (and recent temperature trending confirms this).  In short, while we could have been heading out of the current interglacial period and into some serious ice, this won’t happen after all because — wait for it — we’ve been dumping enough CO2 and other greenhouse gases into the atmosphere to hold off the next glaciation period another 15,000 years!  And if we can just get the greenhouse gas level up to twice what it is now, we will delay the next glacial period up to 60,000 years!  Yay!

Does this mean that I have now changed my mind about the climate change being human-caused?  No, I’m still not convinced, but I will allow the possibility.  We aren’t the only thing blowing out greenhouse gas — I am convinced the situation is very complicated.

Now, what would you prefer: being dumped into the deep freeze, or being warm and toasty?  I don’t know what course others might prefer, but give me warm summers and a mild winter.

You see what this means, of course.  We need more cows.

I was just scanning StackOverflow looking for interesting Questions and Answers, and happened upon this one: Converting JSON to C# Class Object. Serialization is always an interesting subject, and I had previously thought about the idea of serializing (or deserializing) to and from JSON, so I had a look. The questioner was having a problem getting JSON serialization to work, and asked if anyone could see what he was doing wrong. Well, I certainly couldn’t see anything (since I’ve never dealt much with JSON — that’s “JavaScript Object Notation”, for those of you wondering — it was Greek to me).
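For the record, and hedged because (as I said) I haven’t done much with JSON myself: the usual approach to turning JSON into a C# object is to define a class whose property names match the JSON keys and hand both to a serializer.  A sketch using the framework’s own JavaScriptSerializer (the class and field names here are my own invention, not the questioner’s):

```csharp
// Requires a reference to System.Web.Extensions.
using System;
using System.Web.Script.Serialization;

// Property names must match the JSON keys (hypothetical example).
public class MapSettings
{
    public string MapPolicyID { get; set; }
    public int ZoomLevel { get; set; }
}

class Demo
{
    static void Main()
    {
        string json = "{\"MapPolicyID\":\"abc123\",\"ZoomLevel\":7}";
        var serializer = new JavaScriptSerializer();

        // Deserialize<T> maps each JSON key onto the matching property.
        MapSettings settings = serializer.Deserialize<MapSettings>(json);
        Console.WriteLine(settings.MapPolicyID + " " + settings.ZoomLevel);
    }
}
```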

But what caught my eye the most was a comment to the question. See, the questioner’s displayed code contained data names like this one: “strMapPolicyID”. Note the “str” prefix. This is one way to indicate that the object in question contains a string. Or “intZoomLevel”, where “int” indicates that it contains an integer. This kind of notation is what is called “Hungarian Notation”, and in some (most?) programmers’ minds it is a thing to be avoided like the plague. The comment in this case was:

Don’t use Hungarian notation. It’s bad practice generally, and totally unexpected when working with JSON. – Panagiotis Kanavos

This annoyed me. There’s nothing wrong with Hungarian notation, per se, any more than there’s anything wrong with Hungarian. In its proper place (amongst Hungarians), the Hungarian language is perfectly appropriate. And Hungarian notation has its place as well. I don’t use the style of Hungarian notation expressed in the StackOverflow question (I used to, a long time ago), but the inventor of the notation, then-Microsoftie Charles Simonyi (who is, by the way, Hungarian), never encouraged the version of the notation that most programmers are familiar with anyway. This idea of prefacing each variable name with a short indicator of the object’s underlying data type (such as “str” or “int”) is a redundancy, and was not what Simonyi had in mind.

One should read Simonyi’s original article on the subject, on Microsoft’s Developer Network site: http://msdn.microsoft.com/en-us/library/aa260976(v=vs.60).aspx

The kind of prefix that Simonyi argued for was the functional prefix, not the data type. For example, if an integer was intended to represent a color, specifically the color red, the variable might be named coRed. Or, if it was supposed to represent the color of a house, coHouse. I don’t have time to provide a precis of Simonyi’s article — just read it.
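A sketch of the distinction, with variable names of my own choosing (the first two lifted from the question, the rest in the spirit of Simonyi’s examples):

```csharp
class HungarianDemo
{
    static void Main()
    {
        // "Systems" Hungarian, the style everyone loves to hate: the
        // prefix merely restates the declared type, which the compiler
        // already knows. Pure redundancy.
        int intZoomLevel = 12;
        string strMapPolicyID = "abc";

        // "Apps" Hungarian, closer to what Simonyi actually described:
        // the prefix encodes the *meaning*, which the compiler cannot
        // check for you.
        int coRed = 0xFF0000;   // co  = a color value
        int coHouse = coRed;    // the color of the house
        int cchName = 5;        // cch = a count of characters
        int ibFirst = 0;        // ib  = a byte index into a buffer

        System.Console.WriteLine(intZoomLevel + " " + strMapPolicyID + " "
            + coHouse + " " + cchName + " " + ibFirst);
    }
}
```

Note that mixing coRed into a character count like cchName would look wrong at a glance, which is exactly the point of the functional prefix.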

The simple fact about Hungarian notation is this: what most programmers think is Hungarian notation isn’t. They’re attacking a straw man.

I still use Hungarian notation for the names of controls in web pages and Windows Forms. For example, I might name a textbox txtFirstName, or txtAddress1. This lets me make sure I am addressing all related objects. On the other hand, if I had a lot of textboxes, I might group them in nominal groups, such as a set of textboxes intended to collect or display address information. For example: addrStreet1, addrStreet2, addrCity, addrState, and addrZip. The fact that addrState might be a dropdown containing all 50 states is not important; what IS important is that the common prefix makes it easier to code against them all, because auto-completion (IntelliSense in Visual Studio) helps me make sure I got them all. And that is the proper role of Hungarian notation — not to mention the proper definition of it.
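Here is a sketch of what I mean, as a hypothetical WinForms form (the control names are the ones from the paragraph above):

```csharp
using System.Windows.Forms;

class AddressForm : Form
{
    // The functional prefix "addr" groups the address fields together.
    // The control type (TextBox vs. ComboBox) is deliberately NOT
    // encoded in the name; addrState is still "addr".
    TextBox addrStreet1 = new TextBox();
    TextBox addrStreet2 = new TextBox();
    TextBox addrCity = new TextBox();
    ComboBox addrState = new ComboBox();  // dropdown of all 50 states
    TextBox addrZip = new TextBox();

    // Typing "addr" brings the whole group up together in IntelliSense,
    // making it easy to verify that no field has been forgotten.
    void ClearAddress()
    {
        addrStreet1.Text = string.Empty;
        addrStreet2.Text = string.Empty;
        addrCity.Text = string.Empty;
        addrState.SelectedIndex = -1;
        addrZip.Text = string.Empty;
    }
}
```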

By the way, there are a couple of blog posts from famous programmers on the subject that bear linking to:

And, as always, have fun coding!

This blog generally deals with technical topics only, but I am going to make an exception in this case. I guess I have that privilege — I make the rules here, after all!

I’ve been working on a side project for the last six or seven years, something completely unconnected with information technology. And after a lot of time, research and just plain work, it has finally come to fruition. I’ve been working on two books, getting them ready for publication.

My wife’s family is German. My mother- and father-in-law were from Memel, Germany, which is now Klaipeda, Lithuania. At the end of the war in Europe they had been living in Osterode in East Prussia (now Ostroda, Poland). As the Red Army invaded East Prussia in January 1945, the family (consisting of Dad, Mom, and four little girls aged 4 through 11) fled via rail toward Elbing, near the Baltic Coast. They got about halfway before a train wreck blocked further progress, and then they were on foot, walking near the rail line in the dead of a cold, cold winter, until they reached the town of Prussian Holland, which is where the Red Army caught up with them. They took shelter in an abandoned house.

They were able to stay in Prussian Holland while the Red Army flowed through on the way towards Berlin, until the military occupation authorities declared that all German adults must register. Mom and Dad left the kids in the house (a different one, because they had been forced to move), and duly went to report. They never returned, leaving the four little girls to fend for themselves in the burned out town. Both the father and mother were taken separately into the Soviet Union, where the Soviets took advantage of the Yalta Agreement provision permitting them to use “German labor” in rebuilding.

The two books track what happened to the little girls and their mother over the next four years. To this day, nobody knows what happened to the Dad. He never returned, which was the fate of nearly half of the Germans taken into the Soviet Union to do forced labor. And over 500,000 were so taken, not including any prisoners of war taken during or after battle. These half-million were all civilians, mostly women and old men.

The books themselves are the first-person accounts of one of my sisters-in-law, and my mother-in-law. My SiL wrote her own book, titled Yesterday’s Sandhills, and I have edited it extensively over the past seven years (I’ve actually rewritten it five times, but decided in the end to go back and stick closer to her own version). My MiL’s account was originally a transcript (in German) of a tape recording of her telling her entire tale, from the moment they left Osterode on the train until she finally returned to East Berlin. My SiL provided my wife and me with the typed transcript, and we translated it into rough English. I have spent the last two years working to turn it into a viable story that could be made into a book.  We’ve given this book the title The Bones of My People.

I am so relieved to finally be able to say: I’M DONE!!!! During the past week we have finally gotten both books into print, using Amazon’s CreateSpace, and both books are now available in both print and Kindle editions. Actually, the second one isn’t quite available on Amazon (in a few days it will be), but I can offer it on my CreateSpace eMarket already. It’s in the pipeline, though.

While I am happy to say I’M DONE, I’m not really done, of course. Since this is self-published (and we formed an actual publishing company with an EIN and all that), we still have to market the books or nobody will ever see them.

I am going to go to bed tonight and I will not set the alarm clock. I am sleeping in tomorrow, Saturday. I’ve been staying up into the wee hours for the last 2 weeks getting all this finalized and I declare that it is now vacation time. I am going to take a Saturday off. For the first time in months.

[Image: the covers of both books]

Available from Prospect Avenue Books!

I got started building apps for Windows Phone with the idea that maybe I’d be able to make some money at it. Keep dreaming, as they say…

In any case, I wasn’t building games, but more-or-less practical apps, and that, combined with the slow startup of Windows Phone, meant that I wasn’t getting much traction in the Marketplace. My practical apps were somewhat marginally practical — who really needs a Fraction Calculator, anyway? Or a Ham Radio License Test Practice app? Very nice, err.. niche.

But last year I got a surprise from Microsoft: $207! It took well over a year, but I finally saw some cash. And to my surprise (having let my developer account lapse due to the less-than-stellar income), I’m getting another approx. $200 this month! Wow! Won’t let me retire early, but apparently “interest” in my apps is steady, if slow. Maybe I should develop another few apps? Perhaps I will. After I get a couple of books published, maybe.

Thanks Microsoft for remembering me!

I am currently working on a project with a deadline in my day job, and I am having fun with it. Well, mostly. There was this little problem I was having that was beginning to make me quite frustrated. And you know that frustration occasionally results in a computer programmer putting his or her fist through the monitor display, right? Well, I guess that’s a favorite meme; nobody really does this, right? Gosh, I hope not. It wouldn’t be so dangerous these days, what with LED monitors and all, but in earlier ages when we had glass monitor screens we risked serious cuts and bruises.

I was trying to enforce some encapsulation by working with two different VS2012 solutions at the same time. Since I was planning to use a particular set of class libraries in multiple related projects, I wanted to make them into common components. No sense reinventing wheels. So here I was with the common component libraries in one solution and the application using those libraries in the other. The user application had references to the DLLs of the class library, and to the point, the references pointed to the Release folders of the class libraries. Since I was developing both pretty much at the same time (though the class libraries started life and got their major form prior to the user application), I would occasionally find I needed to go back to the class libraries to make tweaks and corrections. Then I would test the user application to see how well the changes worked. The user application was being neatly used as a test bed for the common libraries — meaning the next time I use them, which will be in a follow-on project this week, they will have been largely debugged. Two birds with one stone.

This worked out mostly OK, but I started noticing that sometimes my class library changes weren’t taking effect. I’d make changes, rebuild the libraries, and the changes wouldn’t take effect! But then again, sometimes they would! I was nearing peak frustration, my dual monitors beginning to cringe in anticipation of the inevitable shattering blow from my Fist of Steel®, when it suddenly dawned on me what was going on.

Aaarrrgggghhhh!

Here I was, rebuilding my libraries with the new changes, and failing to rebuild my user application! Of course the changes weren’t taking effect! The user application was never referring to the DLLs in the Release folders of the class libraries; it was referring to the copies of the DLLs it had placed in its own bin folder! And since I had not rebuilt the user application to pull the class library changes into the user application’s bin folder, OF COURSE it continued to use the old versions of the libraries.

In a certain sense I was creating my very own Hell. A versioning Hell. Complete with red-tinged, tailed demons wielding pitchforks. Jabbing me into a towering frustration! But realization is power, and so the demons are now banished to the nether regions. And all is now blue-sky with chirping birds again.

Gosh, I hope I remember this next time.
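In case future me needs more than memory: the underlying issue is that a plain file reference copies the DLL into the consuming project’s bin folder when THAT project builds, and nothing refreshes the copy until the consumer is rebuilt. Here is a sketch of the two reference styles as csproj fragments (paths and names are illustrative, not my actual projects):

```xml
<!-- What I had: a file reference. The DLL at the HintPath is copied
     into this project's bin folder only when this project builds,
     so a stale bin means stale library code. -->
<Reference Include="MyCommonLib">
  <HintPath>..\..\CommonLibs\MyCommonLib\bin\Release\MyCommonLib.dll</HintPath>
</Reference>

<!-- Alternative, if both projects live in the same solution: a project
     reference. MSBuild then rebuilds the library and re-copies its DLL
     automatically whenever the library changes. -->
<ProjectReference Include="..\..\CommonLibs\MyCommonLib\MyCommonLib.csproj">
  <Name>MyCommonLib</Name>
</ProjectReference>
```

With separate solutions, as in my setup, the file reference is unavoidable; the discipline is simply to rebuild the consuming application after every library rebuild.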

Jenna Wortham of the New York Times seems to think that new technology should be cheap enough for poor people to afford it. My first question is: Why?

Headlines are not normally created by the journalist, but are apportioned by editors who sometimes miss the point. Jenna’s article, which in the NYT has the headline “More Tech Magic, if You Can Afford It“, in my local newspaper is headlined “Tech magic comes at a price out of reach for many”. But in this case, the editors in question did get the point Ms. Wortham intended.

While writing about Google Glass, she reports handing her demo device back to the Google employee “…with the sinking feeling that it could be a while before I’d be that close to them again.”

And why would she have such a feeling? She writes:

That’s because they cost $1,500, and they are being made available largely to developers and people who are eager to figure out how to build applications for them.

Her entire article rants on and on about how wonderful the new technology is and how terrible it is that poor people won’t be able to afford it. Until the price goes down. And even when the price does go down, how terrible it will be that the poor won’t be able to afford the newest versions, since the latest and greatest will always cost more than yesterday’s state of the art.

But what the heck is she complaining about? It’s not fair! That’s what. “…it would be a shame if the only people who participate in this leap forward are those who can afford it.” Bat puckey. Cannot “the poor” wait five minutes until they CAN afford it? Apparently this is a tragedy. She agrees with Anil Dash, an entrepreneur and blogger who raised similar concerns last year in a post titled “You Can’t Start the Revolution From the Country Club.” She nods sagely in agreement with Dash’s pathetic premise, and except for the obvious fact that this premise is completely false, we might be tempted to go along with it, too. You can take it to the bank that technological revolutions almost always start from the so-called “country club”. The fact is simply this: if the “country club” doesn’t adopt the new technology at its inception, then nobody else is getting it either. It turns light outside when the sun comes up. No sooner. Deal with it. Just because you don’t like something doesn’t give you an entitlement to have it otherwise.

And fairness? What is it about certain people that they think that being poor gives one rights superior to the better off, who have to work for what they get, too? Oh, sure, you work at a dead-end job, a job that you qualified yourself for by spending your time goofing off instead of getting an education that would make it possible for you to have a better job. Therefore you are entitled to the same choices in life as someone who put business before pleasure and prepared for the future?

Please spare me. I can sympathize with someone affected by breaks that resulted from circumstances truly beyond their control, but most people are poor because of choices they themselves made, or are in situations that they could, with some effort, work themselves out of. Ms. Wortham’s own experience shows clearly that being poor is, with some effort, only a temporary condition. She says she couldn’t afford that iPhone because she was working in a poorly-paying job. But now she can afford one. Does she think she’s the exception?

I’ve been poor, and I’ve been moderately well-off. I knew precisely why it was that way at every point. There was no-one to blame but myself, and I would have felt ashamed of myself for blaming anyone else, and I would have felt like a fool for asking for extraordinary consideration for my condition. “Hey, Steve Jobs, I can’t afford that nice new iPhone because I’ve gotten myself into overwhelming debt due to wanting crap I couldn’t afford, so you must sell me one for one-third its retail value (or better yet, gimme one for free). That would be fairer, wouldn’t it?”

Maybe my desire for an iPhone (or a Windows Phone) should lead me to better my condition so I CAN afford one.
