[storm] future database support

Gustavo Niemeyer gustavo at niemeyer.net
Tue Jul 17 17:54:47 BST 2007


Hey Willi!


> That was me ;-)

Aha!  Nice to see you around. :-)


> The backend itself was no problem.  Getting the tests to work is
> harder ;-)
>
> Anyway, these are the problems so far:

Are you working in a Bazaar branch?  If so, can you push it up
to Launchpad?  That way we (and others) can follow it.


> * AUTO INCREMENT (mysql), SERIAL (postgres) columns
>
>   Oracle doesn't have these column types.
>
>   Solved with Oracle sequences and "after insert" triggers.

Storm doesn't really care how the incrementing ids are generated.  It
works fine with or without them, and with columns that have
database-defined defaults.

Check the insert identity topic below for more information.
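
For reference, the usual Oracle idiom here is a sequence plus a
BEFORE INSERT trigger (it has to be BEFORE rather than AFTER, since
the id must be filled in before the row is written), and from Storm's
point of view that behaves just like a database-defined default.
A rough, untested sketch with cx_Oracle, using made-up table and
sequence names:

import cx_Oracle

connection = cx_Oracle.connect("user/password@localhost/xe")  # made-up DSN
cursor = connection.cursor()

cursor.execute("CREATE TABLE person "
               "(id NUMBER PRIMARY KEY, name VARCHAR2(100))")
cursor.execute("CREATE SEQUENCE person_id_seq")

# Fill in id from the sequence whenever an INSERT leaves it out, so that
# "INSERT INTO person (name) VALUES (...)" behaves like AUTO_INCREMENT
# or SERIAL would.
cursor.execute("""
    CREATE OR REPLACE TRIGGER person_id_trigger
    BEFORE INSERT ON person
    FOR EACH ROW WHEN (new.id IS NULL)
    BEGIN
        SELECT person_id_seq.NEXTVAL INTO :new.id FROM DUAL;
    END;""")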


> * Binary inserts: Oracle has to know if a value is inserted into a
>   binary (raw) column. The following won't work if the column "bin" is
>   a blob:
>
>     conn.execute("INSERT INTO bin VALUES (10, 'Blob 30')")
>
>   Instead, it should be:
>
>     conn.execute("INSERT INTO bin VALUES (10, ?)", CharsVariable('Blob 30'))
>
>   Now the backend knows that it is a "raw" value and can make the
>   right variable binding. But it also means that some tests must be
>   changed.

I see.. changing them shouldn't be an issue.

>   (Side note: the name "Chars" for raw data confused me a lot!)

Yeah.. I acknowledge now that it wasn't a good idea.  We'll fix
that in 0.10.


> * SQL statements without table: Oracle needs a table name in the
>   statement. This won't work:
>
>     def test_execute(self):
>         result = self.store.execute("SELECT 1")
>
>   Here the pseudo-table "DUAL" is needed:
>
>     => "SELECT 1 FROM DUAL"
>
>   How can we solve this in a generic way? I'd like to avoid parsing
>   SQL statements in the backend just to find out whether the "from dual"
>   is needed...

We definitely won't be getting into the SQL parsing business.  In that
specific case, we should make the tests in DatabaseTest more friendly
to Oracle.  I think we can use "SELECT 1 FROM test" in most of them,
without sacrificing the test goal.

Besides that, we should also make sure that Storm is able to compile
things like Select(1) correctly.  That may be done with something
as simple as the following (in the backend!):

from storm import Undef
from storm.expr import Select, compile, compile_select

@compile.when(Select)
def compile_select_oracle(compile, state, select):
    # Fall back to the DUAL pseudo-table when no FROM clause was given.
    if select.default_tables is Undef:
        select.default_tables = "DUAL"
    return compile_select(compile, state, select)
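
If that works as expected, store.execute(Select(1)) should then come
out as "SELECT 1 FROM DUAL".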


> * Insert identity: I don't know how to find the last inserted record
>   in Oracle. Any ideas?
>
>   (Currently I take the record with max(id), but that's only a
>   workaround.)

What Storm wants in this case is a way to identify the row that
was just inserted, so that it may be mapped back into the in-memory
object.

With MySQL, Storm will use the lastrowid attribute of the cursor
object. With SQLite, it will use the "OID".  Oracle seems to support
some kind of magic "rowid" attribute as well.  Do you know if there's
a way to get the rowid for the just inserted row?
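
If cx_Oracle supports DML RETURNING, I imagine something roughly like
this would do it (an untested sketch, with made-up table and column
names):

import cx_Oracle

connection = cx_Oracle.connect("user/password@localhost/xe")  # made-up DSN
cursor = connection.cursor()

# Ask Oracle to hand back the rowid of the row it just inserted.  Binding
# it as a string relies on the implicit ROWID <-> VARCHAR2 conversion.
rowid_var = cursor.var(cx_Oracle.STRING)
cursor.execute("INSERT INTO person (name) VALUES (:name)"
               " RETURNING rowid INTO :rid",
               name="Willi", rid=rowid_var)
# Depending on the driver version, the value may come back wrapped
# in a one-element list.
new_rowid = rowid_var.getvalue()

# The rowid can then be used to read database-generated columns back.
cursor.execute("SELECT id, name FROM person WHERE rowid = :rid",
               rid=new_rowid)
row = cursor.fetchone()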


> * Pagination: the "expr" module uses the LIMIT and OFFSET tokens, but
>   Oracle doesn't support these.
>
>   This could be solved with the Oracle pseudocolumn "ROWNUM" and
>   subselects, but I haven't done it yet.

This should be easy to fix, but I have no idea what Oracle does
in this case.  Can you point me to some documentation and examples?
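
From what you describe, I imagine the ROWNUM approach boils down to
wrapping the ordered query in two subselects, since ROWNUM is assigned
before the ORDER BY is applied.  An untested sketch of the equivalent
of "... ORDER BY name LIMIT 10 OFFSET 20", with made-up table and
column names (pointers to the real documentation would still help):

import cx_Oracle

connection = cx_Oracle.connect("user/password@localhost/xe")  # made-up DSN
cursor = connection.cursor()

offset, limit = 20, 10
cursor.execute("""
    SELECT id, name FROM (
        SELECT sub.*, ROWNUM AS rn FROM (
            SELECT id, name FROM person ORDER BY name
        ) sub
        WHERE ROWNUM <= :max_row
    )
    WHERE rn > :min_row
    """, max_row=offset + limit, min_row=offset)
rows = cursor.fetchall()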


>   And: how can we make the "expr" module backend-aware?

The expr module is fully backend-aware.  You can see expressions being
customized at will in other backends.


Thanks for working on that!

-- 
Gustavo Niemeyer
http://niemeyer.net


