JENS MALMGREN I create, that is my hobby.

Porting my blog for the second time, walk the posts

This is post #3 in my series about how I port this blog from Blogengine.NET 2.5 (ASPX) on a Windows Server 2003 machine to a Linux Ubuntu server with Apache2, MySQL and PHP, a so-called LAMP stack. The introduction to this project can be found in this blog post: /post/Porting-my-blog-for-the-second-time-Project-can-start.

In my previous post I started loading the categories from the old blog into the database of the new blog. One issue so far was that I could get duplicate categories. To solve this I wanted the name field of the category table to be unique.

mysql> drop table category;
Query OK, 0 rows affected (0.09 sec)

mysql> create table category ( ID int auto_increment
            primary key, name varchar(80) not null unique);
Query OK, 0 rows affected (0.08 sec)

mysql> desc category;
+-------+-------------+------+-----+---------+----------------+
| Field | Type        | Null | Key | Default | Extra          |
+-------+-------------+------+-----+---------+----------------+
| ID    | int(11)     | NO   | PRI | NULL    | auto_increment |
| name  | varchar(80) | NO   | UNI | NULL    |                |
+-------+-------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)

That worked really well, but now I had another problem. I would like to be able to run the import as many times as I like. With the new constraint there can be only one category '1980', so when I run my program it gives me an error:

DBD::mysql::db do failed:
Duplicate entry '1980' for key 'name' at /usr/local/bin/ line 20.

This is what I want to happen, but on the other hand, later on in my Perl script I need to know which GUID is connected to which ID. How about generating everything from scratch in one go: one single Perl program that can be run as many times as I want. If needed it should remove previous versions of the tables and generate new ones, so that if I change my mind about, for example, how a table should be designed, I can change that in the program and run the script again. In the end there will be one single Perl program that can port an entire Blogengine.NET XML files directory to a MySQL database with everything I need. Hopefully.
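For that GUID-to-ID bookkeeping, my plan is roughly to keep a hash in memory while inserting the categories. Here is a minimal sketch of the idea; it uses an in-memory SQLite database (DBD::SQLite from CPAN) only so that the sketch runs standalone, while in the real script the handle would be the MySQL connection. The GUIDs and the hash name %category_id_by_guid are made-up examples, not values from my blog:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Stand-in database so the sketch is self-contained; in the real
# program $dbh is the DBI handle connected to MySQL.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1 });
$dbh->do('create table category
  ( ID integer primary key autoincrement, name varchar(80) not null unique )');

# Hypothetical sample data: old-blog category GUID => category name.
my %old_categories = (
    'c3b491e5-59ac-4f6a-81e5-27e971b903ed' => '1980',
    '15537055-7a8c-4b2c-9c7b-c4d1be44668d' => 'Painting',
);

# Insert each category and remember which new ID its GUID got.
my %category_id_by_guid;
my $insert = $dbh->prepare('INSERT INTO category (name) VALUES (?)');
while (my ($guid, $name) = each %old_categories) {
    $insert->execute($name);
    # last_insert_id returns the auto_increment value just assigned
    $category_id_by_guid{$guid} =
        $dbh->last_insert_id(undef, undef, 'category', 'ID');
}

for my $guid (sort keys %category_id_by_guid) {
    print "$guid => $category_id_by_guid{$guid}\n";
}
```

Later, when a post file mentions a category by GUID, the hash gives the new numeric ID to store in the link table.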

So, how do I find out if a table already exists in the database? After a bit of searching I found a way of listing all available tables in the database and of removing them. With the code below I can run the program as many times as I want.

my $dbinfo = $dbh->table_info();
while (my ($qualifier, $owner, $name, $type, $remarks) = $dbinfo->fetchrow_array()) {
	foreach ($qualifier, $owner, $name, $type, $remarks) {
		$_ = '' unless defined $_;
	}
	if ($type eq "TABLE") {
		print "Removing table: $name\n";
		$dbh->do('DROP TABLE ' . $name);
	}
}

print "Creating table: category\n";
$dbh->do('create table category
  ( ID int auto_increment primary key, name varchar(80) not null unique )');

This snippet is the beginning of my conversion program. It removes all tables and creates them again. Later, when I need more tables, I add more create table statements.

Now it is about time to concentrate on the posts. They are stored in XML files, with a GUID as the file name and .xml as the extension. I decided to pick one of these and learn how to load the file in such a way that I can pick out the parts and later insert them into the database. It is a little bit like starting all over, so I commented out all earlier parts of my Perl program. Then I picked one of the post XML files, and that is the file I will open. When I can open the file properly, I will come back to entering it into the database.
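To get a feel for the structure before writing the real loader, a small test script can parse a post with XML::Simple (a CPAN module, not part of core Perl). The sample below is a shortened, made-up stand-in for a real post file, and the element names are from memory of the Blogengine.NET format; the actual script has to be based on an actual file from the blog:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use XML::Simple;   # from CPAN, not a core module

# Shortened, hypothetical stand-in for one of the post XML files.
my $xml = <<'END_XML';
<post>
  <title>My first post</title>
  <content>Hello world</content>
</post>
END_XML

# XMLin accepts a string that starts with '<' and returns a hashref.
my $post = XMLin($xml);

print "Title:   $post->{title}\n";
print "Content: $post->{content}\n";
```

In the real program XMLin would be given the file name instead of a string, and the interesting parts would go into insert statements rather than print.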

I was born in 1967 in Stockholm, Sweden. I grew up in the small village of Vågdalen in northern Sweden. In 1989 I moved to Umeå to study Computer Science at the University of Umeå. In 1995 I moved to the Netherlands, where I live in Almere, not far from Amsterdam.

Here on this site I let you see my creations.

I create, that is my hobby.