Archive for November, 2008

not smart enough to understand

Tuesday, November 18th, 2008

I’m fascinated by the wingnuts’ reactions to their recent electoral drubbing. There’s nothing quite so entertaining as the sight of Republican cannibalism. It’s wholesome entertainment for the whole family.

Sometimes the right-wingers will write something so completely bizarre that it occurs to me that maybe they’re right, after all. Maybe I’m just too unbelievably stupid to understand. If only I had a functional neuron or two I could comprehend their profound wisdom.

I just visited one of my favorite comedy sites, townhall.com. In one essay, that tower of intellectual insight, David Limbaugh, shares his ideas on the future of the Republican Party. In it he writes a paragraph that’s so far over my head it might as well be written in Sanskrit:

Traditionalists don’t oppose this or that “high-minded” plan aimed at delivering security (e.g., health care) or prosperity (e.g., direct transfer payments from producers to nonproducers) because they don’t want more people to be prosperous but because they do and because they cherish freedom. We know that socialism never works and always results in less prosperity, on top of its obvious freedom-stripping inevitabilities.

The freedom-stripping inevitabilities of access to health care are so completely obvious that we don’t even need to discuss what they might be. It’s just dumbass hippie Communist degenerates like me who are puzzled by this, I’m sure.

tooo busy

Thursday, November 6th, 2008

It’s been a while since I’ve blogged. I’ve been so busy with other people’s websites it’s been hard to keep track of which projects I’m supposed to be working on, much less get the work done, or much lesser bill for the work I’ve actually finished. Arg.

So I haven’t had time to write about the play I just acted in. I’ll try to get to that later.

I haven’t had time to write about the photo workshop I taught in Westport. Ditto.

I haven’t had time to write about the election or other political happenings. There’s plenty o’ folks doin’ that.

I haven’t had time to either take or write about interesting photo/hiking trips. Gotta do something about that one.

But for now, a little note about a recent theme in the world of Other People’s Websites.

Three of my recent projects involve taking over a site created by someone else. All three of them involve some fairly sophisticated PHP scripting. There aren’t many PHP developers here on the Mendocino Coast, so local people with a need for such things consistently manage to find me.

It’s an interesting task to reverse-engineer one of these beasts. I start by slogging through the site the end user sees, the PHP scripts that generate it, the style sheet, the graphics, the database, and the other elements which come together to make the site. Once I start to figure out how the thing works, I can do the updates, re-do the visual design, improve the search engine optimization (SEO), or whatever else needs to be done.

Sometimes this experience is educational. I find PHP functions I didn’t know about, or different ways to protect a contact form from injection attacks, or ways to accomplish a task more efficiently than I otherwise would have done. But often, I’m baffled as to what the original designer/developer was thinking. Other times it’s clear the d/d was pretty much clueless about an important aspect of the task.
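
Speaking of injection attacks: the classic hole in a PHP contact form is email header injection, where an attacker sneaks newline characters into a form field so extra headers (and extra recipients) get appended to the outgoing message. Here’s a minimal sketch of one common defense; the field names and address are hypothetical, not from any particular client’s code:

    <?php
    // Hypothetical sketch: strip CR/LF from anything destined for mail()
    // headers, so an attacker can't inject "Bcc:" lines via the form.
    function clean_header($value) {
        return preg_replace('/[\r\n]+/', '', $value);
    }

    $from    = clean_header($_POST['email']);
    $subject = clean_header($_POST['subject']);
    $body    = $_POST['message']; // newlines are fine in the body itself

    mail('owner@example.com', $subject, $body, 'From: ' . $from);
    ?>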

Not to dump on the other d/d guy, but to educate others, I present a brief case study in SEO. I recently started working on the Troll Mother site. The visual design was fine. The content was adequate. But the site was an SEO train wreck. Every page had the same title, and the pages were generated by a script that used the GET method to identify the particular page. The GET method passes data to the script in a query string after a question mark, so you wind up with a URL like this: http://www.trollmother.com?page=trolls.
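
A script built that way typically looks something like this sketch. The file layout and page keys here are hypothetical stand-ins, not the actual Troll Mother code:

    <?php
    // index.php -- hypothetical sketch of a query-string-driven page loader.
    // The page key arrives in the URL as ?page=trolls.
    $page = isset($_GET['page']) ? $_GET['page'] : 'home';

    // Whitelist the allowed pages so a visitor can't request arbitrary files.
    $allowed = array('home', 'trolls', 'about', 'contact');
    if (!in_array($page, $allowed)) {
        $page = 'home';
    }

    include 'pages/' . $page . '.php';
    ?>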

It’s a broadly held consensus in the SEO world that query-string URLs like that are bad practice. In this case, it didn’t keep the googlebot from indexing the pages, but it may well have kept other bots out, or hurt the rankings. I converted the site to “clean” URLs, so the same page is now accessed with this URL: http://www.trollmother.com/index.php/trolls. I modified the scripts so they can take the page info from either method, but once you’re in the site, all the links use the clean URLs.
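
Accepting both URL styles is mostly a matter of checking PATH_INFO before falling back to the query string. A minimal sketch of the idea (again, not the literal site code):

    <?php
    // Hypothetical sketch: accept the page key from either URL style.
    // Clean URL:  http://www.trollmother.com/index.php/trolls
    // Old style:  http://www.trollmother.com/index.php?page=trolls
    if (!empty($_SERVER['PATH_INFO'])) {
        // When the server provides it, PATH_INFO holds everything
        // after index.php, e.g. "/trolls".
        $page = trim($_SERVER['PATH_INFO'], '/');
    } elseif (isset($_GET['page'])) {
        $page = $_GET['page'];
    } else {
        $page = 'home';
    }
    ?>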

I also set up a simple database table to keep track of page titles and descriptions for the various pages. Now each page has its own title with page-specific keywords in it.
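
The lookup itself is a one-row query keyed on the page name. Here’s a sketch using the old mysql_* functions typical of that era; the table and column names are my own invention, and an open database connection is assumed:

    <?php
    // Hypothetical sketch of the per-page title/description lookup.
    // Assumes a table like:
    //   CREATE TABLE page_meta (
    //     page        VARCHAR(64) PRIMARY KEY,
    //     title       VARCHAR(255),
    //     description VARCHAR(255)
    //   );
    $result = mysql_query(
        "SELECT title, description FROM page_meta WHERE page = '"
        . mysql_real_escape_string($page) . "'"
    );
    $meta = mysql_fetch_assoc($result);

    echo '<title>' . htmlspecialchars($meta['title']) . "</title>\n";
    echo '<meta name="description" content="'
        . htmlspecialchars($meta['description']) . '">' . "\n";
    ?>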

Then it’s on to the links. Link exchanges with my other clients should help punch up the PageRank. We’ll see how much this helps over the next few weeks as the site and links are re-indexed, and troll fans find the site in greater numbers.

There’s more info about my approach to SEO in my essay about the Three Cs of Search Engine Optimization.

 

