It’s been a while since I’ve blogged. I’ve been so busy with other people’s websites it’s been hard to keep track of which projects I’m supposed to be working on, much less get the work done, or much lesser bill for the work I’ve actually finished. Arg.
So I haven’t had time to write about the play I just acted in. I’ll try to get to that later.
I haven’t had time to write about the photo workshop I taught in Westport. Ditto.
I haven’t had time to write about the election or other political happenings. There’s plenty o’ folks doin’ that.
I haven’t had time to either take or write about interesting photo/hiking trips. Gotta do something about that one.
But for now, a little note about a recent theme in the world of Other People’s Websites.
Three of my recent projects involve taking over a site created by someone else. All three of them involve some fairly sophisticated PHP scripting. There aren’t many PHP developers here on the Mendocino Coast, so local people with a need for such things consistently manage to find me.
It’s an interesting task to reverse-engineer one of these beasts. I start by slogging through the site the end user sees, the PHP scripts that generate it, the style sheet, the graphics, the database, and the other elements which come together to make the site. Once I start to figure out how the thing works, I can do the updates, re-do the visual design, improve the search engine optimization (SEO), or whatever else needs to be done.
Sometimes this experience is educational. I find PHP functions I didn’t know about, or different ways to protect a contact form from injection attacks, or ways to accomplish a task more efficiently than I otherwise would have done. But often, I’m baffled as to what the original designer/developer was thinking. Other times it’s clear the d/d was pretty much clueless about an important aspect of the task.
Not to dump on the other d/d, but to educate others, I present a brief case study in SEO. I recently started working on the Troll Mother site. The visual design was fine. The content was adequate. But the site was an SEO trainwreck. Each page had the same title. The pages were generated by a script which used the “GET method” to identify the particular page. The GET method uses URLs with a question mark to pass data to the scripts, so you wind up with a URL like this: http://www.trollmother.com?page=trolls.
There’s a broad consensus in the SEO world that the GET method is bad practice for identifying pages. In this case, it didn’t keep Googlebot from indexing the pages, but it may well have kept other bots out, or hurt the rankings. I converted the site to “clean” URLs, so that the same page is accessed with this URL: http://www.trollmother.com/index.php/trolls. I modified the scripts so they can take the page info from either method, but once you’re in the site, all the links use the clean URLs.
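The either-method trick is simpler than it sounds. Here’s a minimal sketch of the idea — the function name and fallback order are my own, not lifted from the Troll Mother scripts: a clean URL like /index.php/trolls puts “/trolls” into PATH_INFO, and if that’s empty we fall back to the old ?page=trolls query string so existing links and bookmarks keep working.

```php
<?php
// Sketch only: resolve the requested page name from either URL style.
// $server and $get are passed in (rather than reading the superglobals
// directly) so the logic is easy to test.
function get_page_name(array $server, array $get, $default = 'home')
{
    // Clean URL: /index.php/trolls arrives as PATH_INFO = "/trolls".
    if (!empty($server['PATH_INFO'])) {
        $page = trim($server['PATH_INFO'], '/');
        if ($page !== '') {
            return $page;
        }
    }
    // Legacy URL: ?page=trolls arrives in the query string.
    if (isset($get['page']) && $get['page'] !== '') {
        return $get['page'];
    }
    // Neither method supplied a page: serve the default.
    return $default;
}
```

In the real scripts you’d call something like `get_page_name($_SERVER, $_GET)` once at the top and dispatch on the result; the point is that old and new URLs funnel into the same lookup.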
I also set up a simple database table to keep track of page titles and descriptions for the various pages. Now each page has its own title with page-specific keywords in it.
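The table itself can be tiny. Here’s a sketch of what I mean — the table and column names, and the sample row, are placeholders of my own, and I’m using an in-memory SQLite database via PDO for illustration rather than whatever the live site runs on:

```php
<?php
// Sketch: one row per page, keyed by the clean-URL page name,
// holding a page-specific title and meta description.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE page_meta (
    page        TEXT PRIMARY KEY,
    title       TEXT NOT NULL,
    description TEXT NOT NULL
)');
// Placeholder content, not the actual Troll Mother copy.
$db->exec("INSERT INTO page_meta VALUES
    ('trolls', 'Troll Mother: Handmade Trolls', 'Placeholder description.')");

// Look up the title/description pair for a page; null if the page
// has no row, so the template can fall back to a site-wide default.
function page_meta(PDO $db, $page)
{
    $stmt = $db->prepare(
        'SELECT title, description FROM page_meta WHERE page = ?'
    );
    $stmt->execute([$page]);
    return $stmt->fetch(PDO::FETCH_ASSOC) ?: null;
}
```

The page-generating script then drops the title into the `<title>` tag and the description into a meta tag, which is all it takes to give every page its own keywords.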
Then, it’s on to the links. Link exchanges with my other clients should help punch up the PageRank. We’ll see how much this helps over the next few weeks as the site and links are re-indexed, and troll fans find the site in greater numbers.
There’s more info about my approach to SEO in my essay about the Three Cs of Search Engine Optimization.