Archive for November, 2005

Focusing my development effort

Thursday, November 24th, 2005

Long-time readers of my blog already know about my tendency to get carried away with stuff. I’ve gotten carried away with something in the past, only to have to retract it the following day. The second post mostly deals with this tendency. To sum up: I don’t think the lesson I need to learn is “refrain more”, as that takes away a lot of the energy as well – “learn to acknowledge my mistakes happily and as early as possible” seems a much more valuable lesson for me. And that applies in many other fields.

I’ve also talked about my inability to write short blog posts – I’ve failed at it almost systematically in the past.

Anyway, to get to the point, this (of course) also applies to my dedication to development. I tend to drift off too easily, especially when the goal involves developing a complex piece of software like NGEDIT. Although I’ve posted in the past about my strategy for the development of NGEDIT, I find that I have to revisit that topic really often – mostly in the messy and hyperactive context of my own thoughts, but I thought I’d post about it as it may also apply to other fellow developer-entrepreneurs.

I recently posted about how I had found out the best way to focus my development efforts on NGEDIT. To sum up: try to use it, and implement the features as their need becomes evident (I’m fortunate enough that I am 100% a future user of my own product). As the first point coming out of that, I found myself working on getting NGEDIT to open a file from the command line. That was weeks ago, and I have still only almost implemented it. How come? It should be simple enough to implement! (At least, given that opening a file through the file-open dialog was already functional.)

Well, the thing is that my tendency to drift off, my ambition, and my yearning for beautiful code kicked in. Instead of a simple solution, I found myself implementing the “ultimate” command line (of course). It’s already pretty much fully architected, and about half-working (although opening files from the command line ended up being just a small part of the available functionality). As I did this, I also started refactoring the part of the code that handles file loading to use my C++ string class that doesn’t suck, which is great, but it’s quite an effort in itself. Meanwhile, I found myself whining that I didn’t want to have all that code written against the non-portable Windows API (as a shortcut I took before summer, NGEDIT uses the Windows API directly, and uglily, in way too many places), so I started implementing an OS-independence layer (I know, I know, these things are better done from day 1, but you sometimes have to take shortcuts, and that was one of many cases). Of course, with the OS-independence layer using said generic string class for its interface. And establishing a super-flexible application framework for NGEDIT, which was a bit cluttered for my taste. And sure, I started trying to establish the ultimate error-handling policy, which led me to posting about and researching C++ exceptions and some other fundamental problems of computing…

If that’s not getting carried away, then I don’t know what is!

Today’s conclusion, after going out for a coffee and a walk in the cool winter air, is that I should refrain from tackling fundamental problems of computing if I am to have an NGEDIT beta in a few months’ time. The code of NGEDIT 1.0 is bound to have some ugliness to it, and I need to learn to live happily with that. Even if I will have to rewrite some code afterwards, business-wise it doesn’t make sense to have the greatest framework, the most beautiful code, and no product to offer!

In any case, I hope I have improved my ShortPostRank score, even if I’m definitely not among world-class short-post bloggers, and you can see I’ve had some fun with self-linking. Something nice to do after starting beta testing for ViEmu 1.4, which will probably be out later this week.

The lie of C++ exceptions

Thursday, November 17th, 2005

As part of the ongoing work on NGEDIT, I’m now establishing the error management policy. The same way that I’m refactoring the existing code to use my new encoding-independent string management classes, I’m also refactoring it to a more formal error handling policy. Of course, I’m designing along the way.

Probably my most solid program (or, the one in which I felt most confident) was part of a software system I developed for a distribution company about 9 years ago. The system allowed salesmen to connect back to the company headquarters via modem (the internet wasn’t everywhere back then!) and pass on customers’ orders every evening. I developed both the DOS program that ran on their laptops and the server that ran on AIX. I developed the whole system in C++ – gcc on AIX, I can’t remember what compiler on the DOS side. Lots of portable classes to manage things on both sides. As a goodie, I threw in a little e-mail system to communicate between them and with HQ, which was out of spec – and I managed to stay on schedule! It was a once-and-only-once experience, as almost all my other projects have suffered from delays – but the project I had done just before was so badly underscheduled and underbudgeted that I spent weeks nailing down the specs so as not to fall into the same trap.

The part I felt was most important to keep solid was the server part – salesmen could always redial or retry, as it was an interactive process. The server part was composed of a daemon that served incoming calls on a serial port, and a batch process that was configured to run periodically and export the received files to some internal database system.

How did I do the error management? I thought through every single line in the process, and provided meaningful behavior. Not based on exceptions, mind you. Typical processing would involve sending out a warning to a log file, cleaning up whatever was left (which required its own thinking through), and returning to a well-known state (which was the part that required the most thinking through). I did this for e-v-e-r-y s-i-n-g-l-e high-level statement in the code. This meant: opening a file, reading, writing, closing a file (everyone typically checks file opens, but in such a case I felt a failure in closing a file was important to handle), memory management, all access to the modem, etc…
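In code, that policy looks roughly like the sketch below. This is not NGEDIT’s or the AIX daemon’s actual code – the function and the `log_warning` helper are illustrative – but it shows the shape: every step is checked, failures are logged, and on any failure the function returns to a well-known state (here, an empty output buffer).

```cpp
#include <cstddef>
#include <cstdio>
#include <string>

// Illustrative logger standing in for "send a warning to a log file".
static void log_warning(const std::string& msg)
{
    std::fprintf(stderr, "warning: %s\n", msg.c_str());
}

// Returns true only if the open, every read, and the close all succeeded.
// On any failure the output is cleared: a well-known state, never partial data.
bool read_file_checked(const char* path, std::string& out)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) {                            // even the open is reported, not assumed
        log_warning(std::string("cannot open ") + path);
        return false;
    }

    char buf[4096];
    std::size_t n;
    bool ok = true;
    while ((n = std::fread(buf, 1, sizeof buf, f)) > 0)
        out.append(buf, n);
    if (std::ferror(f)) {                // a failed read must not pass silently
        log_warning(std::string("read error on ") + path);
        ok = false;
    }

    if (std::fclose(f) != 0) {           // closing can fail too, and it matters
        log_warning(std::string("close failed on ") + path);
        ok = false;
    }

    if (!ok)
        out.clear();                     // back to a well-known state
    return ok;
}
```

The point is the exhaustiveness, not the cleverness: each statement has an explicit answer to “and what if this fails?”.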

C++ brought exceptions. I’m not 100% sure yet, but I think exceptions are another lie of C++ (I believe it has many lies, which I haven’t found documented anywhere). They promise error handling with much less effort, and they promise to let you build rock-solid programs.

The deal is that exceptions are just a mechanism, and a mechanism only allows you to implement a sensible error handling policy. You need a rock-solid policy if you really want failproof behavior, and I haven’t seen many examples of such policies. What’s worse, I haven’t yet been able to figure out exactly what one should look like.

Furthermore, exceptions have a runtime cost, but the toughest point is that they force you to write your code in a certain way. All your code has to be written such that, if the stack is unwound, things get back automatically to a well-known state. This means that you need to use the RAII technique: Resource Acquisition Is Initialization. This ensures that you relinquish the resources you have acquired, so that you don’t leak them. But that is only part of returning to a well-known state! If you are manipulating a complex data structure, it’s quite probable that you will need to allocate several chunks of memory, and any one of them may fail. It can be argued that you can allocate all the memory in advance and only act if all of it is actually available – but then this forces your design around it: either you concentrate resource acquisition in a single place for each complex operation, or you design every single action in two phases – the first to perform all necessary resource acquisition, the second to actually perform the operation.
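To make the RAII point concrete, here is a minimal sketch (the `Resource` and `Guard` names are made up for illustration): the guard’s destructor runs during stack unwinding, so the resource is released even when an exception cuts the operation short. Note that this only releases resources – it does nothing to undo partial modifications, which is exactly the gap discussed above.

```cpp
#include <stdexcept>

// A toy resource whose acquired/released state we can observe.
struct Resource {
    bool open = false;
    void acquire() { open = true; }
    void release() { open = false; }
};

// RAII guard: acquisition happens in the constructor ("initialization"),
// release happens in the destructor, on every exit path.
class Guard {
    Resource& r_;
public:
    explicit Guard(Resource& r) : r_(r) { r_.acquire(); }
    ~Guard() { r_.release(); }
    Guard(const Guard&) = delete;
    Guard& operator=(const Guard&) = delete;
};

// Returns whether the resource leaked after an exception unwound the stack.
bool leaks_after_throw(Resource& r)
{
    try {
        Guard g(r);
        throw std::runtime_error("mid-operation failure");
    } catch (const std::runtime_error&) {
        // Guard's destructor already ran during unwinding.
    }
    return r.open;
}
```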

This reminds me of something… yeah, it is similar to what transaction-based databases do. Only elevated to the Nth degree, as a database has a quite regular structure, and your code usually doesn’t. There are collections, collections within collections, external resources accessed through different APIs, caches of other data structures, etc…

So, I think that in order to implement a nice exception-based policy, you have to design two-phase access to everything – either that, or have an undo operation available. And you’d better wrap that up as a tentative resource acquisition – which requires a new class with its own name, scope, declaration, etc…

Not to mention interaction between threads, which elevates all this to a whole new level…

For an exceptions-based error-handling policy, I don’t think it is good design to have and use a simple “void Add()” method to add something to a collection. Why? Because if this operation is part of some larger operation, something else may fail and the addition will have to be undone. This means either calling a “Remove()” method, which turns into explicit error management, or wrapping it in a “TTentativeAdder” class, so that it can be disguised as an RAII operation. This means any collection should have a “TTentativeAdder” (or, more in line with std C++’s naming conventions, “tentative_adder”).

I don’t see STL containers having something like that. They seem to be exception-aware because they throw when something fails, but that’s the easy part. I would really like to see a failproof system built on top of C++ exceptions.
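For illustration, here is roughly what such a wrapper could look like. This is a hypothetical sketch – nothing like it exists in the STL – in which the addition happens up front and is rolled back in the destructor unless the caller commits:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical "tentative_adder" for std::vector: the push happens in the
// constructor; the destructor undoes it unless commit() was called.
template <typename T>
class tentative_adder {
    std::vector<T>& v_;
    bool committed_ = false;
public:
    tentative_adder(std::vector<T>& v, const T& item) : v_(v)
    {
        v_.push_back(item);          // the tentative operation, done up front
    }
    void commit() { committed_ = true; }
    ~tentative_adder()
    {
        if (!committed_)
            v_.pop_back();           // RAII-style undo on failure paths
    }
    tentative_adder(const tentative_adder&) = delete;
    tentative_adder& operator=(const tentative_adder&) = delete;
};

// Demonstration: a later step throws before commit, so the add is undone.
inline std::size_t size_after_failed_add(std::vector<int>& v)
{
    try {
        tentative_adder<int> add(v, 42);
        throw 1;                     // simulated failure before commit
    } catch (int) {}
    return v.size();                 // 0: the tentative add was rolled back
}
```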

Code to add something to a container, among other things, often looks like this:

void function(void)
{
  //... do potentially failing stuff with RAII techniques ...

  m_vector_whatever.push_back(item);

  // ... do other potentially failing stuff with more RAII techniques
}

At first, I thought it should actually look like this:

void function(void)
{
  //... do potentially failing stuff with RAII techniques ...

  std::vector<item>::tentative_adder add_op(m_vector_whatever, item);

  // ... do other potentially failing stuff with more RAII techniques
}

But after thinking a bit about this, this wouldn’t work either. The function calling this one may throw after it returns, so all the committing should be delayed to a controlled final stage. So we would need a system-wide “commit” policy and a way to interact with it…
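A system-wide commit policy along those lines could be sketched like this (again hypothetical – the `transaction` class and its names are made up): each tentative step registers an undo action, and everything is rolled back in reverse order unless the final stage commits.

```cpp
#include <functional>
#include <vector>

// Hypothetical transaction object: steps register undo actions as they go;
// commit() at the final stage discards them, otherwise the destructor rolls
// everything back in reverse order of execution.
class transaction {
    std::vector<std::function<void()>> undo_;
    bool committed_ = false;
public:
    void on_rollback(std::function<void()> f) { undo_.push_back(std::move(f)); }
    void commit() { committed_ = true; undo_.clear(); }
    ~transaction()
    {
        if (!committed_)
            for (auto it = undo_.rbegin(); it != undo_.rend(); ++it)
                (*it)();             // undo in reverse order of the steps
    }
};

// Demonstration: a later step throws before tx.commit(), so the earlier
// step's push is undone and the vector returns to its well-known state.
inline bool rolled_back(std::vector<int>& v)
{
    try {
        transaction tx;
        v.push_back(1);
        tx.on_rollback([&v] { v.pop_back(); });
        throw 1;                     // simulated failure before the commit stage
    } catch (int) {}
    return v.empty();
}
```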

The other option I see is to split everything in very well defined chunks that affect only controlled areas of the program’s data, such that each one can be tentatively done safely… which I think requires thinking everything through in as much detail as without exceptions.

The only accesses which can be done normally are those guaranteed to only touch local objects, as those will be destroyed if any exception is thrown (or, if we catch the exception, we can explicitly handle the situation).

And all this is apart from how difficult it is to spot exception-correct code. Anyway, if everything has to be done transaction-like, it should be easier to spot: suddenly all code would consist only of a sequence of tentatively-performing object constructions, plus a policy to commit everything at the “end”, whatever the “end” is in a given program.

I may be missing something, and there is some really good way to write failproof systems based on exceptions – but, to date, I haven’t seen a single example.

I’ll keep trying to think up a good system-wide error handling policy based on exceptions, but for now I’ll keep my explicit management – at least I can write code without adapting everything to transaction-like processing, and I can explicitly return things to a well-known safe state.

This was my first attempt at a shorter blog entry – and I think I can safely say I failed miserably!

On blogging, payment processing, and the finite nature of time

Wednesday, November 16th, 2005

I think I will stop apologizing for not posting often. I’d love to post frequently, but both software development and business development take so much time.

Some people are able to post almost daily to their blogs. Of course, that depends on each person’s circumstances. If you are setting up a software business, you have software development to do, and setting up and running the business also takes a lot of time, so blogging usually comes third. Most starting microISVs (I don’t like the term, but everyone’s using it, so why complain) don’t post that much to their blogs (with some notable exceptions, which I think are all linked from the sidebar here). What they all do is put a solid amount of work into their products and websites.

For some reason, it takes me much more thought to post on the blog than to post in a forum. Probably because I kind of like to post interesting, well written, content-rich blog entries – and that takes its own time to do. If I allowed myself to post more undigested stuff I would post more often.

As well, when I get into my “writing” mood, I like it and posts grow and grow and grow and…

One other thing is that, when you’re setting up a business, there’s probably information you don’t want to disclose. At least, I still think it’s worthwhile for my business strategy to not disclose some things. Future business opportunities, etc… apart from regular business info – I’ve thought more than once about posting actual ViEmu sales figures, but I think it could be damaging in the long run. I’m sure people are curious. For those curious, it seems it’s actually taking off a bit, although nothing that makes it qualify as a major revenue stream.

More than one person has asked about my experience with AdWords. Again, I feel I should dig a bit into the logs and post actual stats, rather than just my impressions. So I end up posting nothing, due to lack of time for proper research. The summary: they help. There is fraud, but the low cost of low-competition keywords such as mine makes up for it. How do I know there is click fraud on my site? Because some hits only ever request the HTML page – not even the CSS or the graphics get requested!
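That heuristic is easy to mechanize. Here’s a sketch, assuming the access log has already been parsed into (client, path) pairs – the function names and the log shape are made up for illustration: flag clients that only ever requested HTML pages and never any CSS or graphics.

```cpp
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Crude page test: does the requested path look like an HTML page?
static bool is_page(const std::string& path)
{
    return path.size() >= 5 && path.compare(path.size() - 5, 5, ".html") == 0;
}

// Given (client, path) hits, return the clients that requested only HTML
// pages and never any supporting asset (CSS, images) - likely not browsers.
std::set<std::string> suspicious_clients(
    const std::vector<std::pair<std::string, std::string>>& hits)
{
    std::map<std::string, bool> fetched_assets;  // client -> loaded any asset?
    for (const auto& h : hits)
        if (!is_page(h.second))
            fetched_assets[h.first] = true;      // this client loaded CSS/graphics
        else
            fetched_assets.insert({h.first, false}); // no-op if already seen

    std::set<std::string> out;
    for (const auto& c : fetched_assets)
        if (!c.second)
            out.insert(c.first);                 // only HTML: suspicious
    return out;
}
```

A real browser rendering a page practically always fetches the stylesheet and images right after the HTML, which is what makes the absence of those requests a usable signal.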

Anyway, I was going to post about payment processing. Mainly due to a thread at JoS started by the wonderfully informative Andy Brice of PerfectTablePlan fame (a piece of software to solve your reception seating arrangement problems), I’ve been researching payment methods (yes, the previous link to Andy’s page was designed to help with search engine results, as he’s been so nice sharing so much info and his company seems so serious).

To the point: it seems that using PayPal to process your payments can get you much lower commissions than other services such as share-it, the one I’m currently using. I’ll be looking into setting up PayPal for ViEmu, and I’ll report back on how it works.

But there was another piece of advice I wanted to pass.

When I set up my share-it account, it let me choose whether to process my statements in euros or US dollars. Given that I’m euro based, euros seemed more reasonable, but their fees were lower for US dollars. $3 + 5% for accounts in US$, €3 + 5% for accounts in euros. Given that US$3 is cheaper than €3, I chose US$.

Only to find out that the currency exchange on the monthly wire transfers killed me – I was charged both by the originating bank and on my end (by the receiving bank).

Needless to say, I promptly switched to an account in EUR (a non-automatic process that the share-it people sorted out nicely after I requested it – their service is usually quite responsive).

Just so that you don’t make the same mistake.

Anyway, just to recap: I wanted to share my problems finding time to blog, and to share some interesting info regarding payment processing. Nothing interesting for hardcore C++ programmers today.

No promises, but I intend to post some time soon (or not too far in the future) about my experiences with:

  • NGEDIT development (which is where pretty much all my time has gone since I released ViEmu 1.3 last week)…
  • … which will include the evolution of the C++ string class that doesn’t suck (but is sucking life out of me)…
  • …product development and release strategy for NGEDIT (I can post really often about this, as I can refine or redesign the strategy so many times before I can release the editor)…
  • …possibly on AdWords (if I ever get to dig through the web logs properly)…
  • …web site traffic / marketing (although you can read the meat of the information at this JoS post)…
  • …and too many other issues to name, including open source, the software industry, and the now so popular google bashing, including the many meanings of the word evil

Wishing you all good luck with your own projects.


Monday, November 7th, 2005

See, I had an enlightening experience yesterday. I now know exactly what the best roadmap to follow with NGEDIT is.

I’ve just finished ViEmu 1.3 with the powerful regular expressions and ex command line support. I haven’t been able to release it yet, as Microsoft has changed the way you get the PLK (“package load key”) which you have to include in every new version of your product. It was automated before: you filled out the online form and received the PLK in an e-mail 30 seconds later. But with the release of Visual Studio 2005, they now want to actually approve your product before issuing the key. This means I’m stuck with the code and the whole web info revamp at home. Ah well, I’m a bit angry about that, but then you’re at their mercy – and the important lesson is: I could have filled out forms for ViEmu 1.3, 1.4, 1.5, 1.6, … and even 2.0 last month, and I would have had the keys nicely stored on my hard disk. Something to watch out for in the future: anything you use that is a web service is not under your control – remove that dependency if you can.

Anyway, going back to the point: now that ViEmu 1.3 is ready, I am putting ViEmu development in a secondary position and getting my focus back on NGEDIT. ViEmu is now at a more than acceptable level, and although I’ll keep improving it, I’d better focus on NGEDIT, which is the product with the most potential.

I’ve already talked in the past about how to tackle the development of NGEDIT. I give a lot of thought to how I invest my time – not in order to avoid mistakes; I make many of them and it’s not really a problem. But I like to think and rethink what my goals are, what the best way to reach them is, what the tasks are, and how (and in what order) it makes the most sense to work. Once I had developed ViEmu, it was quite clear to me that I had to keep putting in a lot of effort until the product reached a “serious” level. Slowing down before that wouldn’t make any sense: even if the vi-lovers audience is a small one, the only way to actually discover its size is to get a decent product out there. A so-so product would leave me thinking, if sales weren’t big, that the problem was the product’s quality.

And now that I’m focusing on NGEDIT, deciding exactly how to work on it is no easy feat. I’ve talked about this in the past: I had decided to focus on an NGEDIT 1.0 that includes some of the compelling, innovative parts – there’s no sense in releasing YATE (“yet another text editor”) and hoping it will attract a decently sized audience. I actually started designing and coding some of the most interesting parts of NGEDIT, even though many of the core parts are still incomplete.

But now that I’ve started opening the NGEDIT workspace from Visual Studio again, the number of things that I can do (and that I’ll have to do at some point) is mind-boggling. Just to list a very few:

  • Make the menus configurable… they actually are, but the UI is not there. Of course, this is one-in-one-hundred little things that need to be done.
  • Integrate it well with Windows: registry entries, shell extension, etc…
  • Let the user configure the editor window colors – now all the UI elements can be configured, with the exception of the actual text-editing window itself (funny, as that’s the most important part).
  • Finally finish off implementing DBCS support for Japanese etc… – it’s designed in, but it’s not implemented. Either that, or properly remove support for it in 1.0.
  • Now that ViEmu is so complete… port the emulation functionality back to NGEDIT. The vi/vim emulation in NGEDIT is written in my scripting language, it’s already 4 months old, and it’s really basic compared to what ViEmu does. While I’m at it, properly structure it so that the vi/vim emulation core is shared code between ViEmu and NGEDIT – once again template-based, text-encoding independent, and supporting integration both with mammoth and invasive environments such as VS (where the vi/vim emu core is just a “slave”), and with a friendlier environment like NGEDIT, which is already designed to host custom editor emulation.
  • Clean up many things in the code base – you know, after you’ve been months away from the code, you come back and see many things that are plain malformed. Sure, you also kind of know about them while you are actually coding, but you have to get the job done and you do take shortcuts. I believe in shortcuts, as they’re the way to actually advance, but afterwards you want to pave those shortcuts into proper roads.
  • Really study all that maddeningly beautiful mac software, understand what the heck makes it so incredibly and undeniably beautiful, and try to bring some of that beauty to NGEDIT.
  • Actually work in the most innovative aspects of NGEDIT – the parts with which I hope to create an outstanding product.
  • etc…

You can see… this is just a sample. Not to mention the many things that I have in my todo list, in my various “NGEDIT 1.0 FEATURE LIST” lists, in my handwritten sketches, etc…

It can be quite overwhelming… where do I start? And worse than that, you need motivation to actually be productive. See, one thing is that I’m determined to work on NGEDIT. Another thing is that I need to give myself something concrete to focus on, with which I feel comfortable strategy-wise, so that I will put all my energy on it.

Not having a single idea on how to tackle a problem is blocking. Having too many ideas or tasks can be as blocking.

This is a common problem when you are developing – sometimes, the only way out of such blocking crossroads is to start from the top of the list, even if it is alphabetically sorted, and work on items one by one. I have found that this works best for me when I am about to finish a release of a product or a development cycle. When, after digging for weeks or months, you see the light at the end of the tunnel, but there is still a lot of digging to be done, a similar phenomenon happens. And what I usually do is visualize the image of the finished product on my mind, and then just work on the items in sequential order (not sequential by priority, sequential by whatever random order they ended up listed on the todo list).

But I couldn’t see myself doing this for NGEDIT now – it’s still too far from any end of any tunnel.

I decided to just start browsing around the source code, thinking about code-architecture issues (I have never stopped thinking about many NGEDIT product details even while I was working 100% on ViEmu), and just spending my time with NGEDIT until the mud would clear up.

And it’s happened – I’ve seen exactly the right way to approach it.

What is it? I just need to start actually using NGEDIT. I’m fortunate that I use a text editor for many hours daily, for many tasks, and I can just put that time into NGEDIT – and implement the stuff I need along the way! It’s clear: I just need to make it the best editor for me, and the rest will follow naturally. No need to prioritize features, no need to do heavy code spelunking – just start using it and implement the stuff as I go.

What have been the first tasks to come out of this? Two quite obvious ones. First, I needed to fix a bug in the BOM-mark autodetection code, as I tried to use NGEDIT to edit some registry-export files and noticed the bug while using it. And second (and the reason I was working with registry exports), I need to implement associating file extensions with NGEDIT so that I can just double-click on files! And that requires implementing command line parsing in NGEDIT (which, of course, was buried somewhere around #53 in the todo list!). Why? Because if I am to use NGEDIT for all my tasks, I need to open files efficiently now.
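For reference, the file-extension association boils down to a couple of registry entries, which is why I was looking at registry-export files in the first place. A minimal sketch (the “.ng” extension, the ProgID, and the install path are made up for illustration):

```reg
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\.ng]
@="NGEDIT.Document"

[HKEY_CLASSES_ROOT\NGEDIT.Document\shell\open\command]
@="\"C:\\Program Files\\NGEDIT\\ngedit.exe\" \"%1\""
```

With those in place, double-clicking a .ng file launches the editor with the file path as its first command line argument – which is exactly why command line parsing became a prerequisite.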

It’s incredible how this path is already starting to work – I’m writing this blog post in NGEDIT, and the most important part is that I already feel confident. Confident that I’m on the right track for the earliest possible release date. And that lets me relax and focus on actual work.