Sunday, March 13, 2011

At speed of light - Part 3


  The data synchronization engine was another challenge for our team.  We ran dozens of tests and spent a lot of time investigating SQL Server's behavior, but now we have a nearly perfect engine that is as fast as it is powerful.
  To show its performance, let's take a scenario of transferring table data from one database to another.  We made a copy of the AdventureWorks2008R2 database, truncated every single table in it, and compared the original database with the modified copy.  Thus all records in all tables have the 'new' status, and there are no 'equal', 'different', or 'missing' rows.  Data compare/transfer products typically offer two synchronization options: save the synchronization script to a file, or generate the script on the fly and execute it against the target database.  That's what the following two tests are designed for.
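  For the curious, here is a minimal sketch of how such an empty copy can be prepared (our exact preparation script may have differed, and the database name below is made up).  AdventureWorks2008R2 is full of foreign keys, so a plain TRUNCATE TABLE fails on referenced tables; the undocumented sp_MSforeachtable procedure keeps the workaround short:

    USE AdventureWorks2008R2_Copy;  -- hypothetical name of the copied database
    GO
    -- disable all FK checks, empty every table, then re-enable and re-validate
    EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
    EXEC sp_MSforeachtable 'DELETE FROM ?';  -- DELETE, since TRUNCATE fails on FK-referenced tables
    EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';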

  Test #1 – Save script to file.
  In this test we focused on two criteria: the output file size and the script generation time.  As usual, we did several runs and took the best time for each product.  Look at the diagram and you'll see that our synchronization script is the smallest one; moreover, our engine generates it in just 3.4 seconds, while our competitors are at least 7 times slower.  This incredible result shows how well computer resources can be utilized when code is written by true professionals.
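  To give you an idea of why the file size matters: with every row in the 'new' status, the script is essentially one long stream of INSERT statements, so its size grows in step with the data volume.  A fragment might look like this (an illustrative sketch, not our generator's exact output):

    SET IDENTITY_INSERT [HumanResources].[Department] ON;
    INSERT INTO [HumanResources].[Department] ([DepartmentID], [Name], [GroupName], [ModifiedDate])
    VALUES (1, N'Engineering', N'Research and Development', '20080430');  -- values illustrative
    INSERT INTO [HumanResources].[Department] ([DepartmentID], [Name], [GroupName], [ModifiedDate])
    VALUES (2, N'Tool Design', N'Research and Development', '20080430');
    SET IDENTITY_INSERT [HumanResources].[Department] OFF;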

  Test #2 – Synchronize on-the-fly.
  In this case the performance mostly depends on your SQL Server, but there are some rules that help you get the most out of it.  As you can see, our approach works 1.5 times faster than 'Competitor 1' and 1.8 times faster than 'Competitor 2', because our engine takes all possible factors into account and builds the most efficient data synchronization plan.
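  We won't publish the plan itself, but to show the kind of factors it has to weigh, here is a sketch of two widely known server-side rules (the target table is hypothetical): per-row autocommit forces a log flush on every statement, while an explicit transaction amortizes it over a whole batch, and SET NOCOUNT ON suppresses the per-statement 'rows affected' chatter:

    SET NOCOUNT ON;     -- no 'rows affected' message after every statement
    BEGIN TRANSACTION;  -- one commit (log flush) per batch instead of per row
    INSERT INTO dbo.Target (Id, Payload) VALUES (1, N'a');
    INSERT INTO dbo.Target (Id, Payload) VALUES (2, N'b');
    -- ...thousands more rows per batch...
    COMMIT TRANSACTION; -- commit in moderate chunks to keep the log in check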

  This is the end of the back-end development, so now we are fully concentrated on the user interface, which promises to be as glorious as it is convenient.  Thanks for reading, and stay with us :)



Thursday, December 30, 2010

At speed of light - Part 2

  Today we finally finished the data comparison engine, which was the most complicated part of the product.  It took about a month to design and implement, and now we are ready to show how powerful it is.

Data comparison
  OK, the first test was very simple: compare a database with itself and measure the time taken.  All compared records are identical, so none of them are stored in the program's cache by default, and reading the same database on both the source and target sides minimizes disk load; thus the test emphasizes pure comparison speed.  As in the set-up step, we chose the same two competing products, because they show the best results among all available solutions.  Of course, we are not going to reveal their names or vendors, but we'll leave some clues..  The first database we tested was the well-known AdventureWorks demo database.  As you can see on the diagram, the competing solutions show identical performance, but our engine is 2.2 times faster.  Moreover, our engine works even a bit better with a remote server, unlike the competitors, which get slower in this case, so there the boost reaches 250%.  Then we repeated the test on a larger database to measure performance more precisely, so we got a 45 GB database from a local city shop (thanks to our friends).  The competitors finished in 1 hour and 40 minutes, but our engine did it in only 40 minutes, confirming the results of the previous tests.
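  For those wondering what 'comparing' means here: conceptually, a per-table comparison classifies every row by its key, as the plain T-SQL below does (a sketch with made-up database, table, and column names; our engine streams and compares rows itself rather than running such queries):

    SELECT COALESCE(s.Id, t.Id) AS Id,
           CASE WHEN t.Id IS NULL          THEN 'new'       -- row exists only in source
                WHEN s.Id IS NULL          THEN 'missing'   -- row exists only in target
                WHEN s.Payload = t.Payload THEN 'equal'     -- NULLs need extra care here
                ELSE 'different'
           END AS RowStatus
    FROM SourceDb.dbo.T AS s
    FULL OUTER JOIN TargetDb.dbo.T AS t ON t.Id = s.Id;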

Data caching
  The second test emphasizes data caching speed: how fast a program puts differing records into an on-disk cache, which is used to view table data after comparison and for data synchronization.  For this test we created a simple table with 4 columns, filled it with 10 million records, and created a copy of the table for the target side, but left it empty.  Hence no data is compared; only source records are read and cached.  In this scenario our engine is 1.5 times faster than Competitor 1, and.. well, Competitor 2 is really slow: it prepares data for previewing for an additional 33 seconds, which makes a user wait a painful 1 minute 12 seconds in total to see the results, against our 15 seconds.

Limit reached?
  Both yes and no.  Our engine was created by true professionals, so every single line of code is optimized and the engine uses the CPU to the max; that's why "yes", we've reached the limit.  On the other hand, our engine is highly optimized for multi-processor/multi-core and 64-bit systems, so the more cores you have, the more performance you get.  We believe that on high-end servers our engine could be up to 20 times faster than the best competitors, but this is only a theory we have not yet tested.  That's why "no": the results we showed could be even better, since for the tests we used a simple developer machine with an Intel Core i5 (4 cores @ 2.27GHz), 4GB DDR3 RAM, a single HDD, and Windows 7 Pro x64.  Moreover, our developers know how to squeeze out even more performance, but that would take much more time and be extremely hard to implement.

  We are doing our best to make our solution available to you as soon as possible, but we put quality first, because we want to satisfy people with 'perfect software'.  Thanks for reading!


Thursday, December 9, 2010

That's the performance we are talking about

  Although only about 30% of the data comparison engine is done (see Data Compare SQL), we finished its skeleton and ran the first tests today.  As you may have noticed, we say "the fastest comparison speed ever", and it's time to present the first proof..
  Optillect's data comparison engine works 2.2 times faster than the best competing solution available.  Please note that the engine is not yet complete, so we expect the final result to be even better.
  To measure the performance, we created a simple table with 4 columns (int, int, bigint, uniqueidentifier), added a unique clustered index on the first column, and filled the table with 10 million records.  Then we simply did several runs, measuring the time taken by the data comparison.
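  For reference, a table like that can be created and filled as follows (a sketch with invented names; our actual test script may differ).  The stacked CTEs are a standard row-generator trick that produces the 10 million rows in a single set-based INSERT:

    CREATE TABLE dbo.PerfTest
    (
        Id   int              NOT NULL,
        A    int              NOT NULL,
        B    bigint           NOT NULL,
        Guid uniqueidentifier NOT NULL
    );
    CREATE UNIQUE CLUSTERED INDEX IX_PerfTest_Id ON dbo.PerfTest (Id);

    WITH N1 AS (SELECT 1 AS n UNION ALL SELECT 1),         -- 2 rows
         N2 AS (SELECT 1 AS n FROM N1 a CROSS JOIN N1 b),  -- 4
         N3 AS (SELECT 1 AS n FROM N2 a CROSS JOIN N2 b),  -- 16
         N4 AS (SELECT 1 AS n FROM N3 a CROSS JOIN N3 b),  -- 256
         N5 AS (SELECT 1 AS n FROM N4 a CROSS JOIN N4 b),  -- 65,536
         N6 AS (SELECT 1 AS n FROM N5 a CROSS JOIN N5 b),  -- ~4.3 billion
         Nums AS (SELECT TOP (10000000)
                         ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
                  FROM N6)
    INSERT INTO dbo.PerfTest (Id, A, B, Guid)
    SELECT n, CHECKSUM(NEWID()), n * 1000, NEWID()  -- random int, derived bigint, random guid
    FROM Nums;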
  Keep in mind that this is a simple synthetic test, so the final performance may differ from the figures given here, because we will also run tests on real databases.

Monday, November 22, 2010

At speed of light - Part 1

  This trilogy is dedicated to Data Compare SQL, with the intention of showing you what we've done while the product is under development.  The whole data comparison/transfer process comprises 3 general steps: set-up, comparison, and synchronization.  At the first step a user chooses the servers and databases to compare, and adjusts schema/object mapping and comparison options.  At the second step the program compares table data according to the specified settings.  And the last step is about selecting what to synchronize, adjusting synchronization options, and performing the data synchronization itself.  At each of these steps performance is crucial for large databases (I'm talking about thousands of tables and terabytes of record data), because, for example, saving 1 microsecond (10^-6 s) on each record comparison cuts the total comparison time by nearly 17 minutes for 1 billion records.  We have all the required knowledge to put the best algorithms and methods together, so we promised to create the fastest product ever, and we really will.
  Today we finished the first step, the set-up, and got astonishing performance!  But before showing you the results, let me clarify what we mean by 'set-up'.  Let's start from the very beginning: you select a source server, then a target server, connect to both of them, and choose a pair of databases to compare.  At this point the program must read the databases' metadata to construct an object model of each database, i.e. describe its tables and views, their columns, and so on.  Once the program has the two object models, it finds matches between objects by their names.  This step we call 'mapping'.  Moreover, any good tool performs a deep analysis of every matched pair of columns to find out whether their data types are compatible for comparison and synchronization, which warns the user about possible data loss when the source database schema differs from the target one.  This analysis is considered part of the mapping.
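  To give you a feel for what model construction reads, here is a taste of the catalog queries involved (a simplified sketch; the real model covers views, indexes, defaults, identity and computed columns, and much more):

    SELECT s.name  AS [schema],
           t.name  AS [table],
           c.name  AS [column],
           ty.name AS [type],
           c.max_length, c.precision, c.scale, c.is_nullable
    FROM sys.tables  AS t
    JOIN sys.schemas AS s  ON s.schema_id = t.schema_id
    JOIN sys.columns AS c  ON c.object_id = t.object_id
    JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
    ORDER BY s.name, t.name, c.column_id;

  Two such result sets, one per database, are enough to match tables by name and to check each matched pair of columns for type compatibility.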
Latency chart on constructing database object models and mapping of 22,000 tables
  To test the performance of database model construction and object mapping, we took a really huge SQL Server database with approximately 22,000 tables (thanks to our friend Mike).  And, of course, we picked out two competing products.  We won't name them or their vendors, but we believe there are only two adequate products that pass Optillect's internal standards of quality and performance.  We did 10 runs for each product, measured the time of each run, and took the best time for each product.  As you can see, the results on the chart speak for themselves.  After three weeks of designing Data Compare SQL's architecture and two more weeks of implementing step 1, we can proudly claim that our approach works 4-5 times faster than the best competing solutions available.
  In closing, I'd like to say that we spend a lot of time writing unit tests to ensure the product's quality, and at the moment we have more than 3,000 tests covering every possible case.  When we finish step 2 (data comparison), I will certainly share the results with you by publishing them in 'At speed of light – Part 2'.  We sincerely believe you'll enjoy our solutions :)  Thanks for reading!




Tuesday, October 12, 2010

SQL Server data comparison - here we go round again

  Today we officially launched a new project designed to compare data in tables of different databases, analyse it, and transfer it to (synchronize it with) another SQL Server database (Data Compare for SQL Server).  You may have noticed that we say "The most powerful data comparison and data transfer tool for SQL Server".  Doubt it?  We have a really solid argument: we have made this tool once before.  If you are surprised, don't try to find something like "Optillect SQL Data Compare" right now - you won't, but let me explain.  The lead developers of a well-known tool for comparing data in SQL Server databases joined the Optillect Team from the very beginning.  Having such great experience in the area, we decided to start the project from point zero, which allows us to design a more solid program infrastructure and introduce fresh ideas and concepts.  By now we have already tested some innovations and got outstanding results.  Thus we have all the required knowledge, and it's time to put everything together..
  We assure you the final solution will be as excellent as claimed.  Stay with us to be happy :)

Monday, October 11, 2010

Quality Assured

  One month ago we launched SQL Decryptor Beta, and today we released the final version.  The only thing we added to the new version is the ability to choose a SQL Server instance from the local network.  As for bug-fixes.. we didn't make any :) because there were none.  Yes, we made high-quality software in one go, as promised, counting more than 600 downloads, several pieces of positive feedback, and not a single user issue.  Of course, you might yell "No bugs? Impossible! Every piece of software has at least one minor issue.", and you would be absolutely right.  In fact, Optillect's QA Team did a really good job and left the programmers snowed under with 30 issues for such a simple application.  BUT all these issues concern the convenience of the GUI and don't touch the back-end, and most of them are extremely hard to spot, so you'll probably never face them.  Don't worry, we'll eliminate them all for sure, because our target is perfect quality :)
  Wondering how we reached such a level of quality from the word go?  To be honest, it was pretty easy for Optillect's professional team, because we are used to doing everything ingeniously and scrupulously, as experts with huge experience and diverse skills.  That is the first key to success, and the second is the latest technologies..  Since WPF evolved, it has become much easier than ever to separate the UI from the business logic.  This advantage lets us completely cover the application's back-end with unit tests before it reaches the QA Team.  Thus the programmers are fully responsible for the program's behaviour (developed to specification), while the testers have much more time to concentrate on the UI and concepts rather than doing comprehensive checks against the same specification.  Moreover, we have everyone engaged in various project tasks, rather than entrusting a particular job to a particular specialist.  This approach helps everyone in the team understand the whole project and its final target, and therefore deliver a perfect, well-thought-out solution.
  Stay with us and we guarantee satisfaction and excellence in Optillect's next products :)

Wednesday, September 8, 2010

Why SQL Decryptor

  Hi everyone, this is my first post, and if you found it by using SQL Decryptor, I'd like to clarify why we made this product.  SQL Server allows you to hide object definitions by encrypting them, so that no one can see them again, not even an administrator.  So, if you lose the original script, there is no free way to restore it.  Well, there was no free way..  Unfortunately (yes, unfortunately), it's easy to decrypt: some input data, deprecated RC4, byte-wise XOR, a bit of magic - and voila!  We hope SQL Decryptor will help you get rid of an option that makes no sense.
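  For the technically curious, the gist of that well-known trick is sketched below (this is not our product's exact code, and it requires the Dedicated Administrator Connection, because the base table sys.sysobjvalues is visible only there; the object name is made up):

    -- RC4 is a stream cipher: ciphertext = plaintext XOR keystream.  Encrypt a
    -- dummy definition of the same length with the same key, and the keystream
    -- cancels out byte-wise:  original = encrypted XOR encrypted_dummy XOR dummy
    SELECT imageval                                 -- the encrypted definition
    FROM sys.sysobjvalues                           -- DAC-only base table
    WHERE objid = OBJECT_ID(N'dbo.EncryptedProc')   -- hypothetical object
      AND valclass = 1;                             -- class of module definitions
    -- Next, inside a transaction that is rolled back, ALTER the procedure to a
    -- dummy definition of the same length, read imageval again, and XOR the
    -- three byte streams as above to recover the original script.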
  Well, that was not the true reason.  After the company was founded, we wanted to get everyone working together and just warm up, and thus the idea of a simple tool was born..  Now, after two weeks of relaxed work, we have well-organized collaboration between all team members, and the first version of SQL Decryptor into the bargain :)
  We hope you enjoy and stay pleased with our present and future products.  If you have no idea what SQL Decryptor is, you can get acquainted with it on our official web-site: http://optillect.com/products/sqldecryptor/overview.html