Performance – any way to 'cache' tabbles locally, and 'lazy-load' from db in bac 2015-09-06T20:17:06+00:00


  • Author
  • xkat
    Post count: 39

    Hi Andrea & Maurizio:

am wondering what part of "displaying Tabbles" (expanding the left-column list) is 'Application' vs. DB vs. network latency?

and whether 'upping' the PRIORITY would boost performance?

    many tia

  • Hikari
    Post count: 7

What do you mean by 'Application'?

The problem I see here is that they really want to make Tabbles a shared experience. They don't want sharing to be just one of the features; they want any resource to be sharable, and sharing to be a normal part of the users' experience.

The easiest and probably best way to do it is how they did it: have an RDBMS hold the data and a Tabbles WinApp running on each PC connecting to it. But it has the drawbacks of latency and the requirement of always being connected to the RDBMS, plus of course the resources needed to run the RDBMS.

It was a solid decision, it was implemented, and it is here to stay.

The problem with caching is how to sync data back to the server, and how to merge inconsistencies among clients. I mean, the advantage of a DBMS is that data is processed in a transactional/atomic manner. If 2 users try to write to the same data at the same time, the later one has to wait for the first to finish. Writes to different areas can proceed in parallel.

There are apps that support local caching, like Evernote. But then, again, there's the trouble of implementing multiple data stores, of implementing their syncing and the merging of data, and users must also sometimes do manual merging.

For 1 user on 1 PC, I understand these drawbacks will never apply and we are left with just the performance issues. But, at least for now, this is the best solution available for powerful and easy sharing support.
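    The write serialization described above can be illustrated with a minimal sketch, using SQLite purely as a stand-in (Tabbles actually runs on a server RDBMS): when one connection holds a write transaction on the data, a second writer is blocked until the first finishes.

    ```python
    import os
    import sqlite3
    import tempfile

    # Scratch database file standing in for the shared tag DB.
    # (sqlite is only an illustration; the real product uses a server RDBMS.)
    path = os.path.join(tempfile.mkdtemp(), "tags.db")

    setup = sqlite3.connect(path)
    setup.execute("CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT)")
    setup.execute("INSERT INTO tags VALUES (1, 'photos')")
    setup.commit()
    setup.close()

    # User 1 opens a write transaction and holds the write lock...
    user1 = sqlite3.connect(path, timeout=0)
    user1.execute("BEGIN IMMEDIATE")
    user1.execute("UPDATE tags SET name = 'pictures' WHERE id = 1")

    # ...so user 2's write to the same data cannot proceed until user 1 is done.
    user2 = sqlite3.connect(path, timeout=0)
    try:
        user2.execute("BEGIN IMMEDIATE")
        blocked = False
    except sqlite3.OperationalError:  # "database is locked"
        blocked = True

    user1.commit()  # user 1 finishes; user 2 could now retry and succeed
    ```

    With a local cache instead of a live connection, there is no lock to wait on, which is exactly why conflicting edits must be detected and merged after the fact.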

  • xkat
    Post count: 39

    sorry for posting such ambiguous message!
    i thought Andrea/Maurizio might (;) understand the (poorly worded) question because of previous communication.
    nonetheless, this is about the shell extension (pop-up) tagging interface.
    What i was trying to describe was the ability to cache the tabble list particularly for initial tagging (tabbling).
    Thus, I'd expect the cached Tabble list wouldn't necessarily be 'complete' at all times, except right after a tabble was 'created', 'deleted', or 're-ordered' (hierarchy changed) in the main interface.
    I don’t know how much memory this list would require, and it would ‘lazy-update’ whenever resources were available. The cache would be called when the ‘quick-tag’ was invoked.
    btw, a cache already seems to be in place for 'recent tags' in the pop-up tag interface's "Tag" pulldown.
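    A minimal sketch of what such a lazy-updated cache could look like (all names here are hypothetical, not Tabbles internals): a local copy of the tag list that answers the quick-tag pop-up immediately, is flagged stale when a tabble is created/deleted/re-ordered, and refreshes from the DB in the background when resources allow.

    ```python
    # Hypothetical sketch of a lazy-updated local cache for the tag list.
    # None of these names come from Tabbles itself.

    class TabbleListCache:
        def __init__(self, fetch_from_db):
            self._fetch = fetch_from_db   # callable that hits the real DB
            self._tabbles = []
            self._stale = True            # an empty cache starts out stale

        def invalidate(self):
            """Call on create/delete/re-order events from the main interface."""
            self._stale = True

        def refresh_if_idle(self):
            """Lazy update: pull from the DB when resources are available."""
            if self._stale:
                self._tabbles = self._fetch()
                self._stale = False

        def quick_tag_list(self):
            """Serve the quick-tag pop-up immediately, even if slightly stale."""
            return list(self._tabbles)


    # Example: the DB is simulated by a plain list.
    db = ["work", "photos"]
    cache = TabbleListCache(lambda: list(db))
    cache.refresh_if_idle()

    db.append("music")                   # a tabble is created in the main UI...
    cache.invalidate()                   # ...so the cache is flagged stale
    stale_view = cache.quick_tag_list()  # pop-up still answers instantly

    cache.refresh_if_idle()              # background refresh catches up
    fresh_view = cache.quick_tag_list()
    ```

    The trade-off matches the thread: the pop-up never waits on the network, at the cost of occasionally showing a list that is one change behind the server.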


  • Andrea
    Post count: 892

    Thanks guys for the conversation – solid answer from you Hikari 🙂

    Xkat: what exactly is the problem you're looking to solve? I'm asking because the database querying performance has been surprisingly good (indeed, we have a good technical understanding of it and have therefore been able to optimize it quite a lot); on the other hand the shell extension, which was re-written by 3 different people using 3 different techniques, has proven to be unstable, slow, and generally unpredictable.

    Do you have issues with the shell extension(s)?
