Subject: [FreeVMS] Re: VMS Device Drivers
From: Glenn and Mary Everhart (Everhart@gce.com)
Date: Wed May 01 2002 - 00:54:38 CEST


Roar -
Yeah, pretty much you want to have only one I/O request going to the device
at a time. Some drivers (e.g. DKDRIVER) can handle more by internal queueing
where the device allows it, but this is unusual and adds complexity. Best to
just presume one outstanding I/O and that the driver is gated by the UCB
device-busy bit (set in VMS by the common code that calls drivers, and
cleared by the common I/O-post code, most of the time).
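
To make that gate concrete, here is a rough C sketch of the idea, with
invented structure and routine names (the real code lives in routines like
EXE$INSIOQ and IOC$REQCOM, if memory serves, and the real UCB and IRP layouts
are nothing this simple):

/* Illustration only: invented names, not the real VMS routines or the
   real UCB/IRP fields.  Shows the one-outstanding-I/O-per-unit gate.   */
struct irp { struct irp *next; /* ...request parameters... */ };
struct ucb { int busy; struct irp *ioq_head, *ioq_tail; };

void start_io(struct ucb *ucb, struct irp *irp);   /* driver's start-I/O */

/* Conceptually what happens once FDT processing has built the IRP.     */
void queue_to_driver(struct ucb *ucb, struct irp *irp)
{
    if (!ucb->busy) {                  /* unit idle: run the request now */
        ucb->busy = 1;
        start_io(ucb, irp);
    } else {                           /* unit busy: just park the IRP   */
        irp->next = NULL;
        if (ucb->ioq_tail) ucb->ioq_tail->next = irp;
        else               ucb->ioq_head = irp;
        ucb->ioq_tail = irp;
    }
}

/* Conceptually what happens when a completed IRP has been posted.      */
void request_complete(struct ucb *ucb)
{
    struct irp *next = ucb->ioq_head;
    if (next != NULL) {                /* more queued: stay busy, go on  */
        ucb->ioq_head = next->next;
        if (ucb->ioq_head == NULL) ucb->ioq_tail = NULL;
        start_io(ucb, next);
    } else {
        ucb->busy = 0;                 /* queue drained: clear busy      */
    }
}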

More advanced drivers clear busy themselves, queue IRPs internally, and manage
when they complete them. All the information about an I/O operation is supposed
to reside in the IRP, and possibly in structures hanging off it (rare), and the
normal completion path eventually fires off an AST to get back into the caller's
process and set the status from the I/O operation into the IOSB. Thus you start
the I/O first and get an immediate return, after the driver has encoded all the
user parameters into the IRP (using FDT code). If something goes wrong there,
the return status indicates a problem and the IRP is never posted nor queued to
anything else.
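
The FDT stage is roughly this shape; again a sketch with invented names (a
real FDT routine gets the IRP, UCB and so on handed to it and uses the
system-supplied helpers to validate and lock or buffer the user's data, none
of which is reproduced here):

/* Illustration only: invented names, not the real FDT interface.
   An FDT routine's job: validate the caller's parameters while still in
   process context, copy them into the IRP, and either queue the IRP
   onward or abort the request immediately with an error status.        */
#define MY_SS_NORMAL 1
#define MY_SS_ACCVIO 12                /* stand-ins for real SS$_ codes  */

struct my_irp { void *bufaddr; int bytecnt; /* ... */ };

int  buffer_ok(void *buf, int len);              /* probe/lock user buffer */
void queue_irp_to_driver(struct my_irp *irp);    /* hand off to start-I/O  */
void abort_request(struct my_irp *irp, int st);  /* finish now with error  */

int my_fdt_read(struct my_irp *irp, void *user_buf, int user_len)
{
    if (!buffer_ok(user_buf, user_len)) {        /* bad buffer: fail early */
        abort_request(irp, MY_SS_ACCVIO);
        return MY_SS_ACCVIO;                     /* caller sees the error  */
    }
    irp->bufaddr = user_buf;                     /* everything the rest of */
    irp->bytecnt = user_len;                     /* the I/O needs lives in */
    queue_irp_to_driver(irp);                    /* the IRP from here on   */
    return MY_SS_NORMAL;                         /* immediate "success"    */
}
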
If things worked OK, the immediate return says "success" and the IRP is queued
to the driver's start-I/O routine (the driver is called pointing at the IRP if
the unit was idle; otherwise the IRP just goes on the queue; the UCB busy bit
decides which). Then the driver starts the I/O, and the device interrupt
eventually fires off the driver ISR, which should do its processing, fork down
to IPL 8, and then post the IRP. Posting sends the IRP to the IOPOST queue
(serviced by the IPL 4 software interrupt) so the I/O-post code gets to finish
things up. That can mean freeing buffers, copying data to the user buffer (or
not, if direct I/O), extracting the I/O status from the IRP, and queueing an
AST to the originating process, which picks up the I/O status, fills in the
IOSB, and incidentally sets the event flag so that $SYNCH calls will complete.
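
On the driver side, the ISR-to-fork-to-post sequence looks very roughly like
the sketch below, with invented names once more (the real mechanism uses the
fork block in the UCB, the fork macros, and IOC$REQCOM to hand the IRP to
I/O post, none of which is spelled out here):

/* Illustration only: invented names, not the real interrupt/fork API.
   The ISR runs at device IPL, does the minimum, and forks the rest of
   the work down to driver fork IPL (8); the fork routine then posts the
   IRP with its final status, which is what eventually reaches the IOSB. */
#define MY_SS_NORMAL 1

struct my_irp { int status; int count; /* ... */ };
struct my_ucb { struct my_irp *active_irp; int dev_status; /* ... */ };

int  read_device_status(struct my_ucb *ucb);              /* at device IPL */
void fork_to_ipl8(void (*routine)(struct my_ucb *),
                  struct my_ucb *ucb);                     /* defer the rest */
void post_irp(struct my_irp *irp, int status, int count); /* to IOPOST      */

static void my_fork_routine(struct my_ucb *ucb)
{
    /* Now at fork IPL 8: safe to do the longer bookkeeping, then hand
       the IRP to I/O post, which runs later in the IPL 4 IOPOST
       interrupt and queues the special kernel AST to the process.      */
    struct my_irp *irp = ucb->active_irp;
    post_irp(irp, MY_SS_NORMAL, irp->count);
}

void my_isr(struct my_ucb *ucb)
{
    /* At device IPL: grab whatever the hardware needs read right now,
       then get off the high IPL as fast as possible.                   */
    ucb->dev_status = read_device_status(ucb);
    fork_to_ipl8(my_fork_routine, ucb);
}
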
That AST is a special kernel-mode AST, but its completion may also cause
normal user or exec or whatever-mode ASTs to fire off, depending on the access
mode of the I/O, which is also stored in the IRP. If you issue a $QIO from
kernel mode, you can get normal kernel ASTs when it ends, as well as the
special ones you get anyway. Special kernel ASTs are gated only by IPL, but
normal ones are also gated by whether AST recognition is enabled. If it is
not, I think they just get junked, but I am NOT sure about that one; I would
have to look over the code.
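
Seen from the caller's side, all of that is what sits behind an ordinary
$QIO followed by $SYNCH. A minimal user-mode sketch in C (the device name and
function code here are only illustrative, and error handling is trimmed):

#include <starlet.h>    /* sys$assign, sys$qio, sys$synch */
#include <iodef.h>      /* IO$_READVBLK                   */
#include <descrip.h>    /* $DESCRIPTOR                    */
#include <stdio.h>

/* The IOSB is a quadword: status word, transfer-count word, and a
   device-dependent longword.                                         */
struct my_iosb { unsigned short status, count; unsigned int dev; };

int main(void)
{
    $DESCRIPTOR(devnam, "SYS$INPUT");       /* illustrative device     */
    unsigned short chan;
    struct my_iosb iosb = {0};
    char buf[80];
    unsigned int st;

    st = sys$assign(&devnam, &chan, 0, 0);
    if (!(st & 1)) return st;

    /* Immediate return: the FDT code has packaged buf/size into the
       IRP and queued it to the driver; the IOSB gets filled in later
       by the special kernel AST queued from I/O post.                 */
    st = sys$qio(1, chan, IO$_READVBLK, (void *)&iosb,
                 0, 0,                        /* no completion AST     */
                 buf, sizeof buf, 0, 0, 0, 0);
    if (!(st & 1)) return st;                 /* FDT-time failure      */

    /* $SYNCH waits on the event flag and also checks that the IOSB
       status is nonzero, so a shared event flag cannot fool it.       */
    st = sys$synch(1, (void *)&iosb);
    if (st & 1)
        printf("status %d, %d bytes\n", iosb.status, iosb.count);
    return st;
}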

More advanced processing is all in the driver. BTW, network drivers have a
whole other set of entry points and callbacks that is nothing like the
standard IRP-driven model everything else uses; internally it is much closer
to what you would see in Unix land.

Many standard VMS I/O functions are like Unix ioctl operations, but you can
also add your own driver routines for things that an ioctl would normally do
in Unix. Start-I/O is a single common entry point for reads, writes, and these
ioctl-like functions; the request gets demultiplexed inside the VMS driver,
roughly as in the sketch below.
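
Very roughly, with made-up names (the real driver keys off the IO$_ function
code stored in the IRP; the codes and any IO$M_ modifiers come from $IODEF):

/* Illustration only: invented struct/field names.  The real driver
   switches on the IO$_ function code saved in the IRP; reads, writes,
   SETMODE-style "ioctl" functions and any codes you add all come
   through the same start-I/O entry point.                             */
enum my_func { MY_READ, MY_WRITE, MY_SETMODE };  /* stand-ins for IO$_ codes */

struct my_irp { enum my_func func; void *bufaddr; int bytecnt; /* ... */ };
struct my_ucb { int unit_state; /* ... */ };

void start_read (struct my_ucb *u, struct my_irp *i);
void start_write(struct my_ucb *u, struct my_irp *i);
void set_mode   (struct my_ucb *u, struct my_irp *i);   /* the "ioctl" case */
void finish_with_error(struct my_irp *i);

void my_start_io(struct my_ucb *ucb, struct my_irp *irp)
{
    switch (irp->func) {
    case MY_READ:    start_read (ucb, irp); break;
    case MY_WRITE:   start_write(ucb, irp); break;
    case MY_SETMODE: set_mode   (ucb, irp); break;   /* your own functions
                                                        hang off here too  */
    default:         finish_with_error(irp); break;  /* unsupported code   */
    }
}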

Glenn Everhart

roart@nvg.ntnu.no wrote:
>
> Hi
>
> Has anyone here written a VMS device driver, or is anyone well versed in how they work?
>
> We also need people to do the I/O subsystem, including transforming the Linux
> device drivers.
>
> I have looked a bit at the I/O Device Drivers and Interrupt Service Routines
> chapter.
> As I have interpreted it: with the simple interrupt service routine you
> can only have one I/O request for the device simultaneously.
> Is that right, or did I miss something?
>
> If I interpreted it right, how are more advanced interrupt service routines
> implemented?
>
> Regards,
> Roar Thronæs
>

--
FreeVMS mailing list
To unsubscribe: mailto:freevms-request@ml.free.fr?subject=unsubscribe


