Wednesday, November 30, 2016

Delete Cascade with Recursive PL/SQL

If you need to delete all rows in a table whose primary key is referenced by other tables' foreign keys, and the foreign key constraints have not been defined with "on delete cascade", you can do a recursive delete with the following simple procedure.

This is typically something you will do only in a test or development database, and not in production. As always, it is a good thing to understand this procedure before you execute it:

create or replace procedure delete_cascade(
  p_table_name in user_tables.table_name%type) is
  l_pk user_constraints.constraint_name%type;
begin
  select constraint_name into l_pk
  from user_constraints
  where constraint_type = 'P'
  and table_name = upper(p_table_name);
  for c in (
    select table_name
    from user_constraints
    where r_constraint_name = l_pk) loop
    delete_cascade(c.table_name);   -- recurse into child tables first
  end loop;
  execute immediate 'delete from ' || upper(p_table_name);
exception when no_data_found then   -- tables without a primary key
  execute immediate 'delete from ' || upper(p_table_name);
end;
/
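A hedged example of calling it; the table name here is hypothetical:

```sql
-- deletes all rows in ORDERS after first emptying every table
-- whose foreign keys reference it (recursively)
exec delete_cascade('ORDERS')
```

Note that the procedure does not commit, so you can roll back if the result surprises you, and that circular foreign key references would make it recurse forever.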

Sunday, October 23, 2016

Displaying Spatial Data in SQL Developer

There are many ways to display spatial data, but when exploring a new data set in SQL Developer I think the built-in Map View is practical. For this demo I used a list of nuclear power stations in the world, found in a Google Fusion Table. The Fusion Table can be exported to CSV format, and it includes the longitude and latitude, which makes it easy to convert to SDO_GEOMETRY. After importing the file with SQL Developer into a table called NUCLEAR, I did this to add native Spatial data:

alter table nuclear add geom sdo_geometry;

update nuclear set geom = sdo_geometry(2001, 8307,
  sdo_point_type(longitude, latitude, null), null, null);
-- 8307 is the SRID for longitude/latitude (WGS 84); column names assumed

Since I want to try out a spatial operator, I need to add metadata for the column (or layer) and add a spatial index:

insert into user_sdo_geom_metadata(TABLE_NAME,COLUMN_NAME,diminfo,srid)
  values ('NUCLEAR','GEOM',
    sdo_dim_array(
      sdo_dim_element('Longitude', -180, 180, 0.5),
      sdo_dim_element('Latitude', -90, 90, 0.5)),
    8307);   -- standard long/lat bounds; the tolerance value is an assumption

create index nuclear_si on nuclear(geom) indextype is mdsys.spatial_index
parameters ('layer_gtype=POINT');

The parameter is not necessary, but when you use it, Oracle can check that all new data has the same GTYPE, and it speeds up index creation. Not a big deal here, but useful to know for later.

With this in place I can search for active reactors within a distance of 1000km from my home:

select name, country
  ,round(sdo_geom.sdo_distance(geom,
      sdo_geometry(2001,8307,sdo_point_type(10.8000994,59.8301125,null),null,null)
    ,10, 'unit=km')) "Distance in km"
from nuclear
where sdo_within_distance(geom,
    sdo_geometry(2001,8307,sdo_point_type(10.8000994,59.8301125,null),null,null)
    ,'distance=1000 unit=km') = 'TRUE'
  and active_reactors>0
order by 3;

NAME COUNTRY Distance in km

A table looks nice, but it sure would be more informative on a map. I added the GEOM column to the query (and removed the others since I don't need them, and also the ORDER BY). The result is this:

Bring up the menu on the result set (right-click or similar on one of the rows), and select Invoke Map View on result set:

You get something like this:

That is not phenomenal. I had this idea that there should be a map in here. Well, a map with countries and their borders consists of spatial objects in itself, and since I had downloaded the data files for the book Pro Oracle Spatial for Oracle Database 11g, I could just display the countries within the same distance:

select geom 
from spatial.world_countries
where sdo_within_distance(geom,
    sdo_geometry(2001,8307,sdo_point_type(10.8000994,59.8301125,null),null,null)
    ,'distance=1000 unit=km') = 'TRUE';

and repeat the procedure. I also made this simple query to get an SDO_GEOMETRY object for my own location:

select sdo_geometry(2001,8307,sdo_point_type(10.8000994,59.8301125,null),null,null) 
from dual;

After changing the order, titles, colors, and zoom, the result looks like this:

Also note that I didn't make any comments about our neighbours ;-)

Sunday, October 16, 2016

Importing GPX files to Oracle database, part 2

The previous post showed how to import a file in GPX-format into a table that uses the XMLType. This post shows how to import the waypoints from the GPS logger into a table with the native SDO_GEOMETRY type.

First create  a table:

create table gps_log (
  log_time timestamp with time zone,
  waypoint sdo_geometry,
  src varchar2(20));

The datatype for LOG_TIME is chosen as shown because the type used in the XML file (xsd:dateTime) maps to it, which makes importing the time data much easier. (Have a look at the previous post to see this mapping.)
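As an illustration (my own example value, not taken from the log), an xsd:dateTime string converts directly into this datatype:

```sql
-- xsd:dateTime carries a time zone offset, which maps
-- naturally to TIMESTAMP WITH TIME ZONE
select to_timestamp_tz('2016-10-01T12:34:56+02:00',
         'YYYY-MM-DD"T"HH24:MI:SSTZH:TZM') as log_time
from dual;
```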

Not strictly necessary at this point, but I want to add metadata about the spatial column (aka layer). This is done with an insert on a view that resides in the MDSYS schema (with a public synonym added). Inserts on the view work through instead-of triggers defined on it, in case you wondered how this works:

insert into user_sdo_geom_metadata(TABLE_NAME,COLUMN_NAME,diminfo,srid)
  values ('GPS_LOG','WAYPOINT',
    sdo_dim_array(
      sdo_dim_element('Longitude', -180, 180, 3),
      sdo_dim_element('Latitude', -90, 90, 3)),
    8307);   -- tolerance of 3 meters and SRID 8307, see the discussion below

If you don't want to write this SQL yourself, you can use a nice feature in SQL Developer. Bring up the menu for the table in the Connections window, select Spatial and then Update Spatial Metadata...:

Then you can add the information in the pop-up window:

You can verify that this actually worked with:

select * 
from user_sdo_geom_metadata;

The app I was using sometimes used the position from the mobile network; I guess there was a problem with the GPS reception at times. Anyway, the position reported from the mobile network does not include elevation, so I decided to leave it out. This has an implication when choosing the spatial reference ID: 8307 is commonly used for waypoints given by latitude and longitude (and without elevation), see this chapter for details. Another detail is the chosen tolerance: the app reported that the accuracy was around 4 meters from the GPS, usually worse from the mobile network, so I figured 3 meters is OK. (For geodetic coordinate systems the unit for tolerance is the meter.)
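If you are curious about what SRID 8307 actually is, you can look it up in the dictionary of supported coordinate systems (the view MDSYS.CS_SRS):

```sql
-- the coordinate system behind SRID 8307 (longitude/latitude, WGS 84)
select srid, cs_name
from mdsys.cs_srs
where srid = 8307;
```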

Inserting the data into the table is pretty easy by adapting the SQL code at the end of previous post:

insert into gps_log(log_time,waypoint,src) 
  SELECT EXTRACTVALUE(VALUE(t), 'trkpt/time'),
      sdo_geometry(2001, 8307,
        sdo_point_type(EXTRACTVALUE(VALUE(t), 'trkpt/@lon'),
          EXTRACTVALUE(VALUE(t), 'trkpt/@lat'), null),   -- longitude is X for SRID 8307
        null, null),
      EXTRACTVALUE(VALUE(t), 'trkpt/src')
  FROM gpx_doc g,   -- the XMLType table from the previous post; name assumed
      TABLE(XMLSequence(extract(g.OBJECT_VALUE, '/gpx/trk/trkseg/trkpt',
        'xmlns="http://www.topografix.com/GPX/1/0"'))) t;   -- GPX 1.0 namespace

The following creates a spatial index on the WAYPOINT-column, but before you execute this, make sure the user has the privilege CREATE SEQUENCE in addition to CREATE TABLE:

create index gps_log_si
on gps_log(waypoint) indextype is mdsys.spatial_index;

The statement above will return an error stack including ORA-29855 and ORA-13203 if metadata for the column is missing in USER_SDO_GEOM_METADATA. It is probably a good habit to add it right after creating the table, as shown previously. Note: if the statement failed, it may have created the index object anyway (although not a useful one), and you may need to drop the index before you try again.
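If you hit that error stack, a sketch of the recovery looks like this:

```sql
-- the failed CREATE INDEX may leave an unusable index object behind
drop index gps_log_si;
-- insert the metadata row into user_sdo_geom_metadata as shown earlier,
-- then re-run the CREATE INDEX statement
```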

Tuesday, October 11, 2016

OTN Appreciation Day: A database that is reliable

As suggested by Tim Hall, aka Oracle-Base, this post is about my favourite feature in the Oracle world. My favourite product is the good old database, and my favourite feature is simply the fact that I don't lose data!

Never once have I lost data due to software errors, and after many hard landings the database always manages to come back online after wading through the redo logs and making sure the data remains consistent.

I have been working as a DBA at sites where some decided to do maintenance on their UPS, the SAN, the network, or whatever in peak hours, resulting in hard landings that many databases won't recover from. The worst nightmare for a DBA is to lose data. Yes, we have backups, and we spend time learning disaster recovery. But nothing beats the relief when you initiate "startup" after a crash and Oracle after a little while responds with "Database opened". You may say I've been lucky; software errors that corrupt data do exist, but after thousands of databases and many years of Oracle Database administration, not losing data remains my favourite feature.

My second favourite is all the instrumentation in there, the ability to figure out what is going on when someone complains. The database instrumentation is frequently a good starting place, even when the fault is somewhere outside, like in a badly designed application in another layer.

Thank you, OTN, for the support, information, articles, easy downloads, the ACE program, events, and much more. Have a great Appreciation Day!

Sunday, October 2, 2016

Importing GPX files to Oracle database, part 1

Update 2016-10-16: Found an error in the procedure, see below.

In preparation for a presentation on Oracle Spatial for DBAs I decided to import some GPS-data. I have an app called GPSLogger on my Android mobile that logs GPS data to a file. The format for this is GPS Exchange Format (GPX).  Since I spent some time understanding the XML side of this I decided to blog about it.

Note, there are two versions of the GPX schema, 1.0 and 1.1. My app uses version 1.0; if you try to import a GPX file, have a look in it to verify which schema definition it uses (look for xsi:schemaLocation in the head of the GPX file).

I followed the information from this blog post (that actually uses version 1.1) together with info from Oracle Support, and lots of googling.

The app can upload to Dropbox, which makes it easy to transfer the file to the database server. In addition I needed the schema definition; as mentioned, you'll find the link to the XSD file in the head of the GPX file.

However, the file needs some adjustment in order to have Oracle parse values of type xsd:dateTime correctly. Open it in an editor and look for the xsd:schema element at the top of the file. At the end of its attribute list, add a declaration of the XDB namespace, so it looks like this:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
  ...
  xmlns:xdb="http://xmlns.oracle.com/xdb">

Then you have to search through the file for every definition that uses type "xsd:dateTime" and add the attribute xdb:SQLType="TIMESTAMP WITH TIME ZONE", so it looks like this:

<xsd:element minOccurs="0" name="time" type="xsd:dateTime"
  xdb:SQLType="TIMESTAMP WITH TIME ZONE"/>

(Take away the line break, I added it to make it readable.)
There are four entries, so this manual job does not take long. I transferred the edited file together with my GPX file to a directory on my server (/u01/app/oracle/load_dir). Here is the first part to get started; it is executed as SYSTEM:

create user oyvind identified by oracle 
default tablespace users temporary tablespace temp 
quota unlimited on users;

grant create session, create table, create view, create procedure to oyvind;
grant alter session to oyvind;
grant create type to oyvind;

create directory LOAD_DIR as '/u01/app/oracle/load_dir';
grant read, write on directory load_dir to oyvind;

As you can see, I did not grant XDBADMIN to this user. Update 2016-10-16: The use of DBMS_XDB does indeed require being SYS, the owner of the package (XDB), or having one of the roles XDBADMIN / DBA. Since I did not want an ordinary user to have an admin role, I executed the first part that uses the DBMS_XDB package as SYSTEM. The function below was created in the schema of the ordinary user as well. I must have been experimenting with granting and revoking XDBADMIN without noticing that the first part did nothing since the resource was already there.

The first step is to load the XML schema definition into the database. I copied the function getClobDocument from the mentioned blog; it is used to load files as CLOBs:

create or replace function getClobDocument(
    p_directoryname In varchar2,
    p_filename      In varchar2,
    p_charset       In varchar2 default NULL)
    return          CLOB deterministic
is
  v_file          bfile := bfilename(p_directoryname,p_filename);
  v_charContent   CLOB := ' ';
  v_targetFile    bfile;
  v_lang_ctx      number := DBMS_LOB.default_lang_ctx;
  v_charset_id    number := 0;
  v_src_offset    number := 1;
  v_dst_offset    number := 1;
  v_warning       number;
begin
  if p_charset is not null then
    v_charset_id := NLS_CHARSET_ID(p_charset);
  end if;
  v_targetFile := v_file;
  DBMS_LOB.fileopen(v_targetFile, DBMS_LOB.file_readonly);
  -- load the file content into the CLOB, converting from the given charset
  DBMS_LOB.loadclobfromfile(v_charContent, v_targetFile,
    DBMS_LOB.getlength(v_targetFile), v_dst_offset, v_src_offset,
    v_charset_id, v_lang_ctx, v_warning);
  DBMS_LOB.fileclose(v_targetFile);
  return v_charContent;
end;
/
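A quick sanity check of the function, using the directory and file names from this post:

```sql
-- should return the content of gpx.xsd as a CLOB
select getClobDocument('LOAD_DIR','gpx.xsd','UTF8') from dual;
```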

The following code worked for me to create the resource:

declare
  schemaURL varchar2(256) := '';
  schemaDoc varchar2(30) := 'gpx.xsd';
  xmlSchema xmltype;
  res       boolean;
begin
  xmlSchema := XMLTYPE(getClobDocument('LOAD_DIR',schemaDoc,'UTF8'));
  if (dbms_xdb.existsResource('/home/' || schemaDoc)) then
      dbms_xdb.deleteResource('/home/' || schemaDoc);
  end if;
  res := dbms_xdb.createResource('/home/' || schemaDoc, xmlSchema);
  if res then
    dbms_output.put_line('Resource ' || schemaDoc || ' created');
  else
    dbms_output.put_line('Resource ' || schemaDoc || ' NOT created');
  end if;
end;
/

Make sure you turn on server output so you get a proper message indicating whether it succeeded; no error message was returned when it failed.
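In SQL*Plus or the SQL Developer script window that means:

```sql
set serveroutput on
```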

The rest of this is executed as user oyvind (Update: also create the getClobDocument function from above in this schema). The next step is to register the schema; I used the following code:

declare
  schemaURL varchar2(256) := '';   -- the schema location URL from your GPX file
  schemaDoc varchar2(30) := 'gpx.xsd';
begin
  dbms_xmlschema.registerSchema(
    schemaURL => schemaURL,
    schemaDoc => xdbURIType('/home/' || schemaDoc).getClob(),
    local => true,
    genTypes => true,
    genTables => false,
    enableHierarchy => dbms_xmlschema.ENABLE_HIERARCHY_NONE);
end;
/

Use the following code to create a table to store the document as XMLType (the table name and schema reference here are placeholders; use the URL you registered above):

create table gpx_doc of XMLType
  (CHECK (XMLIsValid(object_value) = 1))
  xmlschema "gpx.xsd" element "gpx";

Then, finally, you can load the GPX file with this (the table name is a placeholder):

insert into gpx_doc
VALUES (XMLTYPE(getClobDocument('LOAD_DIR','20161001.gpx')));

To see some of the data, you can extract what you find interesting with:

select to_number(EXTRACTVALUE(VALUE(t), 'trkpt/@lon'))  longitude,
       to_number(EXTRACTVALUE(VALUE(t), 'trkpt/@lat'))  latitude,
       TO_NUMBER(EXTRACTVALUE(VALUE(t), 'trkpt/ele'))   elevation,
       EXTRACTVALUE(value(t),'trkpt/src')               src
from gpx_doc g,   -- placeholder table name
     TABLE(XMLSequence(extract(g.OBJECT_VALUE,'/gpx/trk/trkseg/trkpt',
       'xmlns="http://www.topografix.com/GPX/1/0"'))) t;   -- GPX 1.0 namespace

This is an example of what comes out (it was a short walk):

10.79831903 59.83451229 85 gps
10.80009538 59.83263633 170 gps
10.80033655 59.83156553 146 gps
10.8003321 59.8303762 network

In next post I plan to use the Spatial datatype to store these points. I may do another test with GPX version 1.1 in the mean time since that is the version used by my Garmin GPS.

Some tips if you need to clean up in case of errors: if you have the recyclebin enabled, do a "purge recyclebin" after dropping the table with the XMLType. I used the following commands to delete the schema:

purge recyclebin;
exec dbms_xmlschema.deleteschema(schemaurl => '');

Get in touch if you need a copy of the final gpx.xsd file.

Friday, September 30, 2016

Oracle Open World 2016 Afterthoughts

So I went to Oracle Open World again. For me this year it was partly vacation, networking, and inspiration for work. I went on my own, to a few sessions, spent a lot of time at Oak Table World, hanging out in the OTN Lounge, socialising, and at meet-ups.

On my way home I put down some of the takeaways:

  • There were a lot of clouds, but it didn’t rain.
  • There are some speakers who manage to go back to stuff I’ve seen before, yet inspire me with new ideas.
  • Some presenters are so inspiring that I went to sessions that I thought were almost irrelevant for me.  It happened more than once that I had a “Huh, this is cool!”-moment.
  • I am thankful for being part of the ACE program, and it is a great privilege to meet more or less likeminded people from all around the world. To stay inside it, I have to keep learning and share with the community, another good motivation for a better career.
  • OOW, despite being one of the most commercial conferences I go to, is the best opportunity to meet all these people; I’ll do the same next year.
  • Again I met some very helpful people from Oracle, people I like to talk to about everything, from Apocalypse to Zookeeper.
  • These people remind me that Oracle is more than a machine, and the software I work with every day is made by both smart and nice experts. Next year, join the biking on Saturday, or the running over the bridge on Sunday, to see what I mean.

OTN, which is responsible for the ACE program, does a lot of cool stuff like the IoT workshop, and much more. I hope the team understands they have many friends around the world. Some of us plan to show this in the near future; check this blog post by Tim Hall aka Oracle-Base:

Next year, October 1.

Thursday, July 21, 2016

24 Basic Tips on Using SQL Developer

I have been using SQL Developer since it required some goodwill to work. Some of the motivation came from an annoyance with Toad sessions in the database. I remember that back then I googled "how to block Toad users from the database".

Anyway, these days most developers around me are using SQL Developer. But quite a few learn only the bare minimum to get their job done. The following list is what I use often, and some of it results in "How did you do that?" when people are looking over my shoulder. The shortcuts here refer to the Windows version since that is what I'm currently using at work. You can easily remap them from Preferences.

  1. Use templates from Preferences to create shortcuts to frequently used SQL statements, or fragments of them. You'll find it under Database - SQL Editor Code Templates.
  2. Speaking of Preferences, there is lots of stuff to tweak; if you are looking for something special, use the search field in the upper left corner.
  3. Templates are useful, so are snippets which can be anything from functions and hints to small scripts. Check out View -> Snippets, and when you see something cool, just drag it into the editor window.
  4. Drag the table from the list of tables in the Connections window to the worksheet, and it will offer to generate SQL for you (INSERT, SELECT, etc).
  5. Copy rows from Query Result with Shift-Ctrl-C to get the column headers. Paste it directly into Excel or in Outlook Email. In the latter you can use the "Convert to text" function to get a nice looking table in a few steps.
  6. Attach shortcuts to worksheets with Alt-Shift-number (1-9). Switch between them with Alt-number
  7. If you have used one of those tools that requires you to mark the statement you want to execute, you don't have to do this with SQL Dev; with the cursor somewhere on the statement just hit Ctrl-Enter.
  8. To convert text to and from upper, lower, or initcap, mark the text and hit Ctrl-' (this one was not set on my Mac version, if that is your case, go to Preferences, search for shortcut, and in that window search for 'other'). Repeating it will cycle through upper, lower, and initcap.
  9. How to insert a list you receive into a table: Say someone sends you a list of 10 rows in an email, go to the Data tab for the table and add 10 rows. Copy the rows from the email and put the cursor on the first row in the data tab, and paste them in with Ctrl-V. SQL Dev will paste them in as you expect in Excel. This even works with two or more columns if the columns are separated with tabs. 
  10. Using Subversion (or Git) from SQL Dev saves you a mental context switch compared to using an external client. By opening SQL scripts in SQL Developer, edit, save, and commit them (to the source control system) your work flows better.
  11. Reformat code with Ctrl-F7. Makes it easier to read when you have copied and pasted generated code from somewhere.
  12. Get instant info about a table. Put the cursor on the table name in the worksheet and hit Shift-F4. A window pops up with lots of vital information, and it is quicker than to search for it in the hierarchy. 
  13. Hit Ctrl-G to go to a specific line in your editor window. 
  14. After you have become more productive and live your life in SQL Dev, it may need more memory. See this post for details; in short, uncomment the line in product.conf that starts with AddVMOption, and change the value from 800m (default) to say 1024m.
  15. Save query output for future reference by using the red pin in the Query result window. You can go back and see what SQL produced it by clicking the blue "SQL".
  16. When you look at the indexes for a table in the Pop-up info box (Shift-F4), you can open the declaration for the index by right-clicking on a row for an index, and choose Go To...
  17. Right-click on an editor tab and choose Float. It will move the worksheet tab to a separate window. Then you can click on other tabs and drag them to that window and group connections that belong together.
  18. Change the DATE-format in the Preferences, the default format is usually not enough for me, so I add HH24:MI:SS 
  19. DBAs and others often work on many tasks at the same time, and some scripts or SQL commands take a long time to execute. Instead of checking back once in a while for completion, open the Task Progress window from the View menu. It shows tasks that are running, completed, or failed. Click the icon on the right side to remove finished tasks.
  20. SQL History (F8) is useful, even as backup of previous complex SQL that you have been working on, but didn't save before the crash...
  21. Use Connection Color when setting up a connection towards a production database. I use red, and it adds a discreet line around the edge to remind me not to do something stupid. This is much better than having the entire background bleeding when doing serious business.
  22. Hit F10 to get the execution plan for the statement where your cursor is.
  23. If you need to get to your database through a jump server, see my blogpost about it.
  24. Code completion is very useful, but sometimes it doesn't work. There is a limit to how much processing and parsing SQL Developer can do before your patience runs out. It helps to at least finish previous statement with a semi-colon or something that clearly marks the start of next statement, and write correct SQL.
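As a sketch of the memory setting mentioned in tip 14: after uncommenting and editing, the line in product.conf would look something like this (the exact value is up to you):

```
AddVMOption -Xmx1024m
```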
There is a lot of smartness built into the tool, sometimes a bit hidden so as not to get in your way. If you think SQL Developer can't do something, just try it out. So far I have never received any obscene error messages.

Monday, May 23, 2016

Open Session Error when connecting to Hive from SQL Developer

This blog post from Oracle explains well how to connect to Hive from SQL Developer. In short, you need to download the JDBC driver from Cloudera and make sure you use the correct version: version 4, not 4.1. Three zip files are included; use the one that starts with Cloudera_HiveJDBC4_2.*.

Anyway, after adding the jar files to SQL Developer, and restarting it,  the connection failed with:
Error setting/closing session: Open Session Error
Not really informative for anyone used to ORA error codes.


I tested the connection locally with the beeline command that is included with Hive. It worked OK when connecting with the URL jdbc:hive2://, but when I connected with URL jdbc:hive2://rio:10000 (rio is the hostname), it returned the error:

hadoop is not allowed to impersonate anonymous 

I tried options -n and -p to connect as user hadoop, but the error message was similar:

hadoop is not allowed to impersonate hadoop
Turned out that I needed to change the configuration of Hadoop itself, by adding proxy-user properties to core-site.xml for the user running HiveServer2 (hadoop in my case; the block below is the standard form of this setting):

  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>

Yes, this is lax security, but OK in my lab. After a restart of HDFS etc., SQL Developer connected successfully to Hive. Proxy user in Hadoop is explained here.

Running simple queries from Hive, either from beeline or SQL Developer is slow, which makes you think twice before you submit a query, unless you are going for a cup of coffee.  That is a good thing, I guess.

Tuesday, April 19, 2016

Undo old mistakes with online table redefinition

There are times when you as a DBA wish you were involved before someone designed a data model. After release to production, correcting mistakes is usually complicated and risky.

Update 2016-04-20: Fixed errors in code.

There is an exception to this. Online table redefinition lets you change the structure of a table while everything is up and running. I have used it lately to partition large tables. During the process I also added compression, and moved some of the partitions to another tablespace that I have set read-only, in order to reduce backup time as well. LOBs in one table were moved to SecureFiles with deduplication.

Redefining tables online is explained in the Administrator's Guide, chapter 20 (11g and 12c). The PL/SQL package DBMS_REDEFINITION is used along with SQL to create a new table with the desired structure.

The whole process can be quite simple, depending on how much you want to change, though it may require some time. Of course, in development, you can repeat the whole process in order to improve on the design in case you are not satisfied. In theory this can also be achieved in production, but I do recommend that you get tired of fixing this in development before you move on to production.

The example is made simple to get an overview, check the documentation for more details.

I did the whole process as user SYSTEM, but it may be executed as the owner of the table provided execute privilege on the package has been granted along with CREATE TABLE and CREATE MVIEW.

  1. Verify that your table can be redefined this way:

      -- owner and table name are placeholders
      exec dbms_redefinition.can_redef_table(uname => 'SCOTT', tname => 'BIG_TABLE', options_flag => dbms_redefinition.cons_use_rowid)

    The last parameter is for tables without a primary key (PK) defined; the default value, dbms_redefinition.cons_use_pk, applies to tables with a PK. In other words, if your table has a PK (as it should have), skip this last parameter here and in step 3. If no error is returned you are good to go.
  2. Create an empty interim table with the structure you want. Spend some time here in order to avoid new issues. I used SQL Developer to generate the SQL for the original table and edited  it to create the new table.  This is rather simple if all you want is to introduce partitioning or similar that doesn't change the columns and datatypes. Also note, remove all constraints on this interim table, they will be copied later. 
  3.  Start the process with:

      begin
        dbms_redefinition.start_redef_table(
          uname        => 'SCOTT',        -- owner assumed
          orig_table   => 'BIG_TABLE',
          int_table    => 'BIG_TABLE_TMP',
          col_mapping  => null,
          options_flag => dbms_redefinition.cons_use_rowid);
      end;
      /

    The 4th parameter (col_mapping) is null in this example, but it is used when you want to reorder columns and need to use column mapping between the two tables. This statement starts the copying of rows from the original to the interim table, and will take some time depending on the amount of data. You may speed things up by enabling parallel DML and query with:

      alter session enable parallel dml;
      alter session enable parallel query;
  4. The next step is to have Oracle copy all dependent objects of the table, that is, indexes, constraints, triggers, privileges, stats, and more:

      declare
        l_err pls_integer;
      begin
        dbms_redefinition.copy_table_dependents(uname => 'SCOTT'   -- owner assumed
          , orig_table => 'BIG_TABLE'
          , int_table=>'BIG_TABLE_TMP',num_errors=>l_err);
        dbms_output.put_line('Errors: ' || l_err);
      end;
      /

    This procedure actually has options to skip certain object types like indexes, triggers, etc. Check the documentation for details. Again, the amount of time this takes depends on the size of it all. But it is kind of cool that the Oracle database is working hard for you while you read Oracle blogs. If this procedure fails, it may be that you created the interim table (BIG_TABLE_TMP) with a NOT NULL constraint; easy to leave in there. Just remove the constraint and repeat this step; no need to start over from scratch.
  5. The last step is to finish it all with:

      exec dbms_redefinition.finish_redef_table(uname => 'SCOTT', orig_table => 'BIG_TABLE', int_table => 'BIG_TABLE_TMP')

    When this completes, the dictionary has been updated and the table named BIG_TABLE will have the structure you created in BIG_TABLE_TMP, and vice versa. You can now drop the BIG_TABLE_TMP table.

As usual with online operations there are some locks involved when the data dictionary is updated, so you may choose to finish the operation off peak hours. Also, in order to shorten the time spent in step 5 you may use DBMS_REDEFINITION.SYNC_INTERIM_TABLE between steps 4 and 5 to refresh the interim table first. Lastly, if you want to abort the procedure between steps 3 and 5, just execute the ABORT_REDEF_TABLE procedure.
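A hedged sketch of those two calls, with placeholder owner and table names:

```sql
-- refresh the interim table with changes made since START_REDEF_TABLE
exec dbms_redefinition.sync_interim_table(uname => 'SCOTT', orig_table => 'BIG_TABLE', int_table => 'BIG_TABLE_TMP')

-- or abort the redefinition and clean up the internal bookkeeping
exec dbms_redefinition.abort_redef_table(uname => 'SCOTT', orig_table => 'BIG_TABLE', int_table => 'BIG_TABLE_TMP')
```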

This feature has been around for some time, and is very stable. If you have SQL not performing well because a table grew too fast, or you have some other structural problem, this feature is really a low-hanging fruit. The idea is that you don't touch the original table, but let Oracle do the swap for you when everything is ready (in step 5).

As always, keep it simple, life as a DBA has more than enough of excitements.

Wednesday, February 17, 2016

SQL Developer and SSH for DBAs

In version 4.0.3 SSH was introduced in SQL Developer; here's a post from Jeff Smith about it. Now this feature has become even easier to use with SSH configuration made independent of the database connections. 

DBAs typically have a jump server they connect to in order to login further to the database servers. With the new implementation (I'm using version 4.1.3 at the moment) you setup one SSH host, and for every listener you want to connect to you configure a tunnel through the jump server.

Go to View -> SSH to show the window for SSH configuration. Add a host with a right-click on "SSH Hosts". To begin with it is sufficient to add just a descriptive name, a hostname or IP address, and your username. Then with a right-click on the host select "New Local Port Forward" as shown here:

The name should be the database name if you use dedicated listeners for every database, or a descriptive name of the listener if it is shared by many databases. If you are using RAC, don't use the hostname for the SCAN listener, but one of the vip addresses. Also keep the default "Automatically assign local port", because there is some nice magic behind the scenes here.

Where I work I typically have many connections to the same database, each with a different Oracle user. With the new SSH feature in SQL Developer it gets very easy to setup. For every new connection select Connection Type SSH and you pick from the Port Forward list as shown here:

(By the way, did you notice I changed the Connection Color? One of my favourite features when dealing with production databases).

That's it! The point here is that you reuse each port forward, aka tunnel, as often as you need; you no longer need to configure the same SSH tunnel for each database connection. And if the listener is moved you change it only once. Exactly what port is used locally is taken care of, which makes this much easier to set up and use than creating the tunnels in PuTTY or on the SSH command line, since you don't have to remember the local port.