We’ve recently developed new resources to help clients achieve their Microsoft certification goals. We now offer a directory of Microsoft certification training, organized by certification path and the associated courses. One such path is the Microsoft SQL Server 2008 certification training.
This SQL Server Reporting Services 2008 webinar is a quick overview of the new layout and features of SQL Server Reporting Services 2008. Read the rest of this entry »
This MySQL webinar recording includes concepts from our MySQL training classes to teach you how to use MySQL Workbench for data modeling. The webinar looks at the uses of data modeling and at Workbench’s ability to easily convert a logical data model to a physical model, and vice versa. Read the rest of this entry »
There are many methods for importing data from operating system files into Oracle tables, including the SQL*Loader utility, the utl_file package, and the import option in Oracle SQL Developer. In this article, I discuss how to easily import data using SQL Developer.
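Outside of SQL Developer, the same file-to-table load can also be scripted. The sketch below is illustrative only and is not one of the article’s three methods: it reads a small CSV with Python’s csv module into an in-memory SQLite table, standing in for the parse-and-insert step that SQL*Loader and the SQL Developer import wizard automate. The table and column names are assumptions.

```python
import csv
import io
import sqlite3

# Hypothetical employee data, standing in for an operating system file.
data = io.StringIO("empno,ename\n7839,KING\n7698,BLAKE\n")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (empno INTEGER, ename TEXT)")

# Parse the file into dicts keyed by the header row, then bulk-insert.
rows = list(csv.DictReader(data))
con.executemany("INSERT INTO emp VALUES (:empno, :ename)", rows)
con.commit()

print(con.execute("SELECT COUNT(*) FROM emp").fetchone()[0])  # 2
```

The GUI wizard in SQL Developer performs essentially these steps for you: inspect the delimited file, map columns to the target table, and insert the rows.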
In SQL, queries that place a wildcard character at the beginning of a search pattern are very inefficient, because a B-tree index cannot be used when the pattern starts with a wildcard. Applications that need to search by, for example, email domain involve a query like:
SELECT customerID, email FROM customers WHERE email LIKE '%.co.uk';
This type of query is executed as a full table scan since, as mentioned, database indexes cannot support a wildcard at the beginning of the search pattern. In this article, I discuss an approach that greatly improves the above query. It involves a little tweaking of the stored data and the application queries. Read the rest of this entry »
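One common form of that data tweak is to store a reversed copy of the searched column, so a trailing-pattern search becomes a leading-pattern search that an index can serve. The sketch below demonstrates the idea with Python’s sqlite3; the `email_rev` column and index name are assumptions for illustration, and whether the index is actually used for LIKE depends on the specific database and collation settings.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# email_rev holds each address reversed; an index on it can then
# satisfy a "starts with" search.
cur.execute("CREATE TABLE customers (customerID INTEGER, email TEXT, email_rev TEXT)")
people = [(1, "alice@example.co.uk"), (2, "bob@example.com")]
cur.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(cid, email, email[::-1]) for cid, email in people],
)
cur.execute("CREATE INDEX idx_email_rev ON customers (email_rev)")

# '%.co.uk' reversed is 'ku.oc.%' -- the wildcard now trails the pattern.
pattern = "%.co.uk"[::-1]
cur.execute(
    "SELECT customerID, email FROM customers WHERE email_rev LIKE ?",
    (pattern,),
)
print(cur.fetchall())  # [(1, 'alice@example.co.uk')]
```

The application pays a small cost on insert (maintaining the reversed copy) in exchange for turning a full table scan into an index range scan on lookup.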
In an earlier post, I showed how to use Oracle SQL Developer Data Modeler to reverse engineer an existing schema. Frequently, however, you don’t have an existing database but need to create a data model from scratch. For this purpose, the Data Modeler provides an easy-to-use functionality.
Assume that we need to model the classical structure CUSTOMER places an ORDER. This tutorial shows you how to create an entity relationship diagram (ERD) to model this structure.
In Part II of this three-part series, I described a process for removing all but a portion of a large database table. There is a problem with that approach: because of the TRUNCATE statement, there is a period of time during which the table appears empty. This may not be desirable, especially in an environment where several users access the database concurrently.
As I mentioned in that article, the alternative, DELETE, is not the best solution since this statement places locks on the rows that are marked for deletion. In this article, I describe a process that uses the DELETE statement but prevents the database from locking a large portion of the table for too long. We add the LIMIT clause to the DELETE statement so that the database applies the deletion incrementally, and wrap it in a procedure so that the deletion can be done iteratively.
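The iterative pattern can be sketched as a loop that deletes a fixed batch, commits, and repeats until no rows match. The demo below uses Python’s sqlite3; since SQLite builds without MySQL’s `DELETE ... LIMIT` syntax by default, each batch is selected by rowid in a subquery, whereas the article’s MySQL procedure would issue `DELETE FROM log WHERE ... LIMIT n` directly. Table and column names are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, level TEXT)")
cur.executemany("INSERT INTO log (level) VALUES (?)",
                [("debug",)] * 950 + [("error",)] * 50)
con.commit()

BATCH = 100
deleted_total = 0
while True:
    # Delete at most BATCH matching rows per pass, so locks are held
    # briefly; in MySQL this loop body is DELETE ... WHERE ... LIMIT 100.
    cur.execute(
        "DELETE FROM log WHERE rowid IN "
        "(SELECT rowid FROM log WHERE level = ? LIMIT ?)",
        ("debug", BATCH),
    )
    con.commit()  # release locks between batches
    if cur.rowcount == 0:
        break
    deleted_total += cur.rowcount

print(deleted_total)                                       # 950
print(cur.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 50
```

Committing between batches is the point of the technique: concurrent readers and writers only ever wait for one small batch, never for the whole purge.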
In a previous post, I discussed how to efficiently delete data from large tables in MySQL. In many situations, however, you need to keep a small subset of the data and remove the rest. In these cases, the TRUNCATE statement cannot be used on its own, because it deletes all rows rather than a selected subset.
The immediate solution that comes to mind is to go back to the good old DELETE…WHERE statement to specify which records to remove. However, this runs into the same inefficiency discussed in the earlier post, especially for large tables. In this article, I discuss an approach that takes advantage of the efficiency of the TRUNCATE statement while retaining a portion of the large table. Read the rest of this entry »
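One common shape for this copy-and-swap technique is: copy the rows to keep into a new table, then replace the big table with the small one. The sketch below runs the steps through sqlite3; in MySQL the swap would typically be `RENAME TABLE` followed by `TRUNCATE`/`DROP` on the old table, both fast metadata operations compared with a massive DELETE. Table names and the keep condition are assumptions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, day TEXT)")
cur.executemany("INSERT INTO events (day) VALUES (?)",
                [("old",)] * 900 + [("recent",)] * 100)
con.commit()

# 1. Copy only the rows to keep into a new table.
cur.execute("CREATE TABLE events_keep AS SELECT * FROM events WHERE day = 'recent'")
# 2. Swap the small table into place of the large one.
#    (MySQL: RENAME TABLE events TO events_old, events_keep TO events;
#     then TRUNCATE/DROP events_old.)
cur.execute("DROP TABLE events")
cur.execute("ALTER TABLE events_keep RENAME TO events")
con.commit()

print(cur.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 100
```

The expensive work is proportional to the small subset being kept, not to the large volume being discarded, which is what makes this faster than DELETE on the bulk of the table.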
In a production environment, database tables grow quickly, and at times well beyond the DBA’s initial estimates. One of the most challenging tasks in this environment is operating on large data sets while other concurrent processes access the same data. This means that any table operation has to finish as quickly as possible to minimize the delay it might cause. Read the rest of this entry »