The <karma/> Tag
! Setting !! <tt>c2s</tt> value !! <tt>s2s</tt> value !! Description
|-
| <tt><init/></tt> || 10 || 50 || The initial value for <tt>karma</tt> on a new socket.
|-
| <tt><max/></tt> || 10 || 50 || The maximum <tt>karma</tt> value that can be attained by a socket.
|-
| <tt><inc/></tt> || 1 || 4 || By how much the <tt>karma</tt> value is incremented (over time).
|-
| <tt><dec/></tt> || 1 || 1 || By how much the <tt>karma</tt> value is decremented in a penalty situation.
|-
| <tt><penalty/></tt> || -6 || -5 || The <tt>karma</tt> value is plunged to this level once it falls to <tt>0</tt>.
|-
| <tt><restore/></tt> || 10 || 50 || The <tt>karma</tt> value is boosted to this level once it rises (after a penalty) to <tt>0</tt>.
The relationship between an entity's karma and how much data it is allowed to write to the socket is linear; in fact, the amount is:
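The settings in the table describe a small state machine: karma starts at the initial value, recovers over time up to the maximum, is decremented in penalty situations, plunges to the penalty level when it reaches zero, and is boosted to the restore level when it climbs back to zero. As a hedged sketch (the class and method names here are assumptions, and the linear byte-allowance formula itself is not reproduced), the bookkeeping looks roughly like this:

```python
class Karma:
    """Hypothetical model of jabberd-style karma bookkeeping (names assumed)."""

    def __init__(self, init=10, max_=10, inc=1, dec=1, penalty=-6, restore=10):
        self.value = init          # <init/>
        self.max = max_            # <max/>
        self.inc = inc             # <inc/>
        self.dec = dec             # <dec/>
        self.penalty = penalty     # <penalty/>
        self.restore = restore     # <restore/>

    def heartbeat(self):
        """Periodic increment: karma slowly recovers over time."""
        if self.value < 0:
            self.value += self.inc
            if self.value >= 0:            # climbed back to zero after a penalty
                self.value = self.restore  # <restore/> boost
        else:
            self.value = min(self.value + self.inc, self.max)

    def penalize(self):
        """Apply a penalty-situation decrement."""
        if self.value > 0:
            self.value -= self.dec
            if self.value <= 0:            # fell to zero: plunge to the penalty level
                self.value = self.penalty
```

With the default c2s values, ten penalty decrements take a fresh socket from 10 down through zero and plunge it to -6; six heartbeats later it climbs back to zero and is restored to 10.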
Revision as of 03:06, 14 September 2006
Server Architecture and Configuration
If you followed Chapter 3 through, you should now have a Jabber server of your own up and running. If you also made the configuration changes described there, you may be curious to find out about the other 99 percent of the configuration file contents--what it does, what sort of structure (if any) exists, and how you might modify the configuration to suit your own requirements.
On the other hand, if you want to press on with learning about the protocol and looking at the recipes, you can safely skip this chapter right now and jump to Chapter 5. Whenever you want detail on specific server configuration, you can come back here at any time. Indeed, we'll be referring to parts of this chapter throughout the rest of the book.
Despite the initially daunting and seemingly random nature of the jabber.xml file contents, there is a structure to the configuration. This chapter will take you through that structure, explaining how all the pieces fit together and describing what those pieces do. In order to understand the configuration structure, we examine the nature of the server architecture itself. This architecture is reflected in that structure, and if we are to understand the latter, it helps to understand the former.
Indeed, in order to take the best advantage of what Jabber has to offer in terms of being a basis for many a messaging solution, it's important to understand how the server works and how you as a programmer fit in. Jabber programming solutions can exist at different levels within the Jabber architecture; understanding this architecture can help you make better decisions about what needs to be done to build a solution.
So in this chapter, we'll take a look at the Jabber server architecture and follow that by an in-depth tour of the server configuration in jabber.xml. Finally, we'll have a look at some of the server "constellation" possibilities--how you can organize parts of the server to run over different hosts and how you can make the server host multiple virtual server identities.
An Overview of the Server Architecture
In order to understand the configuration directives and how they work, it is necessary to take a step back and look at what the Jabber server really is.
jabberd and Components
The Jabber server is a daemon, jabberd, that manages the flow of data between various components that collectively make up the Jabber service. There are different components, each of which performs different kinds of tasks, and there is a basic set of components that is required for a simple Jabber server such as the one we configured and installed in Chapter 3.
The following list shows what the basic Jabber components are and what services they provide. It's worth considering the original and most well-known application of Jabber--instant messaging--and a Jabber design feature (distributed server architecture) to put this list into context and make better sense of it.
- Session Management
  We need to be able to manage users' sessions while they're connected to the server. The component that does this is called the Jabber Session Manager (JSM), and it provides IM features such as message store and forward and roster management, as well as session management itself.
- Client (to Server) Connections
  This is the component that manages the connections between clients and the server. It is known internally as c2s.
- Server (to Server) Connections
  If there's a requirement to send a message from a user on one Jabber server to a user on another Jabber server, we need a way for the servers to connect to each other. This component establishes and manages server-to-server connections and is known internally as s2s.
- Logging
  As in any server system, the ability to log events (error messages, notices, alerts, and so on) is essential. The logging component allows us to do this.
- Data Storage
  There will be some server-side storage requirements, for example, to hold user authentication information and data such as last connect time (not to mention storage of rosters, personal details, and private data). The Data Storage component does this for us. It is known internally as the xdb component. xdb stands for "XML Database."
- Hostname Resolution
  Last but not least, we may need some way to resolve names of hosts that the Jabber server doesn't recognize as "local," as in the Server (to Server) connection context. This component is known internally as dnsrv.
The relationship between jabberd and the components is shown in Figure 4-1. These components are the engines that serve and process XML messages, providing their services, and jabberd is the backbone along which messages are routed.
As seen in Figure 4-1, the jabberd backbone acts as the central artery or hub, managing the peripheral components that are attached to it. The management of these components encompasses controlling and overseeing how they connect and coordinating the flow of data between them. Certain types of components receive only certain types of data. There is a distinction made between three different types of component:
The different component types handle different types of data packets. Each packet is in the form of a distinct, fully formed XML fragment and is identified by the outermost element name in the XML fragment. This element name is matched up to a particular component type.
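This element-name matching can be pictured as a simple lookup. The following toy sketch (not jabberd source; the dictionary and function names are assumptions) encodes the mapping between a packet's outermost element and the component type that handles it, as described in the sections that follow:

```python
# Mapping from a packet's outermost element name to the component type
# that handles it (a toy model of jabberd's dispatch, not its real code).
COMPONENT_TYPE_FOR_ELEMENT = {
    "log": "log",          # <log/> packets go to log components
    "xdb": "xdb",          # <xdb/> packets go to xdb components
    "message": "service",  # the three building blocks, plus <route/>,
    "presence": "service", # all go to service components
    "iq": "service",
    "route": "service",
}

def component_type(root_element: str) -> str:
    """Return the component type matched to a packet's outermost element."""
    return COMPONENT_TYPE_FOR_ELEMENT[root_element]
```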
The log components
The log components handle <log/> packets; you can guess that these are the components that provide logging services.
On receipt of a <log/> data packet, a logging component will (hopefully) do something useful with it, like write it to a file or to STDERR.
The <log/> packet shown in Example 4-1 is being used to record the successful connection and authentication of user dj, on yak, using the Jabber client JabberIM.
A <log/> packet
<log type='record' from='dj@yak'>login ok 192.168.0.1 JabberIM</log>
The xdb components
The xdb components handle <xdb/> packets. The <xdb/> packets carry data and storage/retrieval requests to and from the xdb components that provide the Data Storage services.
On receipt of an <xdb/> data packet, an xdb component will retrieve data from or write data to a storage system such as a collection of flat files or an RDBMS.
The <xdb/> packet shown in Example 4-2 is carrying a request from the session manager to retrieve the preferences stored in a private namespace for the user dj (on Jabber server yak by the Jabber client JabberIM).
An <xdb/> data packet
<xdb type='get' to='dj@yak' from='sessions'
The service components
The service components handle the three main building blocks on which the Jabber functionality is based (the <message/>, <presence/>, and <iq/> packets). You can find out more about these building blocks in Chapter 5.
In addition, service components also handle the <route/> packets, which are used internally by jabberd to move packets around between components. For example, the Session Management component is the component that usually handles client authentication. It receives any incoming authorization requests received by and passed on from the Client (to Server) Connections component. However, it may be that the administrator has configured the Jabber server to use a different (third-party) component, developed by another group or company, to handle the authorizations. In this case, the request is routed from one component (Session Management) to another (the third-party authorization component).
So unlike the log and xdb components, which handle data packets whose element names match the component type (<log/> and <xdb/>), the service component is an umbrella component designed to handle packets with different element names (<iq/>, <message/>, <presence/>, and <route/>). Example 4-3 shows two typical service packets.
Two service packets
<route to='dj@yak/81F2220' from='15@c2s/80EE868'>
  <presence>
    <status>Online</status>
  </presence>
</route>

<message id="jim_id_7" to="sabine@merlix" type="chat">
  <x xmlns="jabber:x:event">
    <composing/>
  </x>
  <thread>3A378DF2B70F6A53A9C317CF526C6B7A</thread>
  <body>Hi there</body>
</message>
The first is an internal <route/> packet, which is carrying a <presence/> packet from the Client (to Server) Connections component, identified by the c2s part of the from attribute, to the Session Management component (where the session identifier 81F2220 is significant in the to attribute). This identifier is a hexadecimal representation of the user's session ID within the JSM, carried internally as a JID resource in the routing information. (The 15 is the identifier for the socket on which the pertinent client connection has been made.)
The second is a <message/> packet, which contains the message itself ("Hi there"), as well as other information (a message event request and a conversation thread identifier; these are examined in detail in Part II of the book, particularly in Chapter 5 and Chapter 6).
It isn't necessarily the case, however, that all xdb components will handle all <xdb/> packets or all service components will handle all <presence/> packets. The configuration, described later in this chapter, determines how the components announce themselves and state their readiness to receive and handle packets.
The phrase delivery tree is often used in Jabber terminology to signify a component or components that handle certain types of packet. The path a packet makes as it descends the collection of decision branches that guide it to the component or components that will handle it can be described in terms of such a delivery tree. For example, an xdb component type is sometimes referred to as an xdb Delivery Tree. The division of components into different types that handle different packet types is perhaps easiest to visualize as a tree, as shown in Figure 4-2.
The Jabber Delivery Tree shows which component types can handle what sorts of packets in the Jabber world. Furthermore, the log packets are distinguished by their log type (error, notice, or record), the xdb packets are distinguished by namespace, and all the packets are distinguished by hostname. This means, for example, multiple xdb components can be organized, and configured, to handle packets qualified by different namespaces, and intended for different hosts, while different log components can be set up to handle different log packet types. We'll see examples of this configuration later in this chapter.
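These delivery-tree checks can be sketched in a few lines. The following is a hedged model (field names and the `accepts()` helper are assumptions, not jabberd's implementation) of the filtering just described: everything is filtered by hostname, log packets additionally by log type, and xdb packets additionally by namespace:

```python
def accepts(instance, packet):
    """Would this component instance receive this packet? (Toy model.)"""
    if instance["hosts"] and packet["host"] not in instance["hosts"]:
        return False                                      # hostname filter
    if instance["type"] == "log":
        return packet.get("log_type") in instance.get("log_types", ())
    if instance["type"] == "xdb":
        return packet.get("ns") in instance.get("namespaces", ())
    return True   # service components take message/presence/iq/route

# For example, an error/notice logger serving only the host yak:
elogger = {"type": "log", "hosts": ["yak"], "log_types": ("error", "notice")}
```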
Component Connection Methods
The notion of components providing distinct services and being coordinated by a central mechanism (jabberd) suggests a certain amount of independence and individuality--a plug-in architecture--and that is what Jabber is. The components described earlier, and others too, are "plugged in" to the Jabber backbone according to the requirements of the server.
The idea is that once you have the basic services like Session Management, Client (to Server) Connectivity, and Data Storage, you plug in whatever you need to suit the server's requirements. For example if you need conferencing facilities, you can plug in the Conferencing component. If you need user directory facilities, you can plug in the Jabber User Directory (JUD) component. If you need a bridge to the Yahoo! Instant Messaging system, you can plug in the Yahoo! Transport component. You can also connect to a component running on another Jabber server, as you'll see later in this chapter. You can write your own components and plug those in as well to provide services not available off the shelf. We build our own components in Section 9.3 in Chapter 9 and Section 10.3 in Chapter 10.
Components are plugged in to the Jabber server backbone in one of three ways:
- Library load
- TCP sockets
- STDIO

Let's examine each one in turn.
Library load

The core components of a Jabber server providing IM services are connected using the library load method. This simply means that the component sources are compiled into shared object (.so) libraries and loaded into the main Jabber process (jabberd).
The components are written specially with the Jabber backbone in mind and contain standard component registration routines that utilize functions in the core Jabber libraries. These routines are used to bind the component relationship with jabberd (for example, there is a "heartbeat" mechanism through which the components are monitored) and to specify packet receipt requirements.
The library load method, which is also sometimes known as dynamic load, is represented in the configuration by the <load/> tag, which wraps the library (or libraries) that should be loaded. Example 4-4 and Example 4-5 show how components can be plugged into the standard jabber.xml file using the library load. Example 4-4 shows the Client (to Server) Connections (c2s) component, which has been written and compiled as an .so library, being connected using the library load method.
Loading of the c2s component with library load

<load>
  <pthsock_client>./pthsock/pthsock_client.so</pthsock_client>
</load>

In this example, we see the simpler form of the <load/> tag. Inside the tag we have:

<pthsock_client>./pthsock/pthsock_client.so</pthsock_client>

which specifies two things:
- Which library to load (in this case, ./pthsock/pthsock_client.so).
- The name of the component registration routine that should be called by jabberd once the library has been loaded. The name of the routine is the name of the tag that wraps the library filename; in this example, it's pthsock_client(), denoted by the <pthsock_client/> tag.

The second library load example, Example 4-5, shows multiple .so libraries being loaded when a component is connected--the form of the <load/> tag is slightly more involved.
Loading of the JSM component with library load
<load main="jsm">
  <jsm>./jsm/jsm.so</jsm>
  <mod_echo>./jsm/jsm.so</mod_echo>
  <mod_roster>./jsm/jsm.so</mod_roster>
  <mod_time>./jsm/jsm.so</mod_time>
  ...
</load>
Here we see multiple libraries being loaded to form the Session Management (the JSM) component, known as jsm.
This is what happens in a library load situation in which multiple libraries are involved:
- jabberd loads the library in the tag that's pointed to by the main attribute of the <load/> tag; in this example, it's the library ./jsm/jsm.so.
- jabberd then invokes the registration routine called jsm(), according to the name of the tag, as before. The JSM loads the rest of the modules defined within the <load/> tag (mod_echo, mod_roster, mod_time, and so on), invoking each module's registration routine (mod_echo(), mod_roster(), mod_time(), and so on) as they're loaded.
In case you're wondering, all the modules that belong to the JSM are actually compiled into a single .so library, which is why all the .so filename references in Example 4-5 are the same.
TCP sockets

Another method for connecting components to the Jabber backbone, in fact the most flexible method, uses a TCP sockets connection. This means that a component connected in this way can reside on the same or a different server than the one running jabberd. So instead of being loaded directly into the jabberd backbone, TCP sockets-connected components exist and run as separate entities and can be started and stopped independently.
The configuration syntax for defining a connection point for a component that is going to connect to the backbone via TCP sockets looks like this:
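A sketch of such a stanza, built from the child tags described just below; the address, port, and secret values here are placeholders, not defaults:

```xml
<accept>
  <ip>127.0.0.1</ip>
  <port>9001</port>
  <secret>shhh</secret>
</accept>
```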
The name for this TCP sockets tag configuration stanza is <accept/>, reflecting the low-level socket library call accept() to which it directly relates. As <load/> is to the library load method, so <accept/> is to the TCP sockets method.
The <accept/> tag usually has three child tags: <ip/>, <port/>, and <secret/>. There is a fourth tag <timeout/> with which you can control the heartbeat monitor of this component connection, which defaults to a value of 10 (seconds) if not explicitly specified (it seldom is).
To configure a TCP sockets stanza, specify an IP address (or hostname) and port to which the component will connect. If you want the socket to be network interface independent, you can write <ip/> (an empty tag) to listen on your specified port on all (INADDR_ANY) IP addresses. The <secret/> tag is used in the handshake when the component connects to the backbone, so that it can be authenticated.
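On the component side, the connecting entity proves knowledge of the secret with a handshake digest. As a hedged sketch (the function name is an assumption; the digest scheme shown mirrors the Jabber component handshake convention of a SHA-1 hex digest over the stream ID concatenated with the secret):

```python
import hashlib

def component_handshake(stream_id: str, secret: str) -> str:
    """SHA-1 hex digest of the stream ID concatenated with the shared secret."""
    return hashlib.sha1((stream_id + secret).encode("utf-8")).hexdigest()

# The component sends the resulting digest back to jabberd after the stream
# header; jabberd computes the same digest from its <secret/> and compares.
```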
More information on connecting components with <accept/> can be found in Part II.
Standard I/O (STDIO)
The TCP sockets component connect method is used to connect an external component to the Jabber backbone via a socket connection through which streamed XML documents are exchanged. There is another way for components to connect and exchange XML document streams with the Jabber backbone--using the STDIO connection method.
While the TCP sockets method requires external components to be independently started and stopped, the STDIO method represents a mechanism whereby the jabberd process starts the external component itself. The component to start is specified inside an <exec/> tag. (Indeed the STDIO method is also known as the exec method.) Example 4-6 shows how the STDIO method is specified in the configuration.
Invoking an external component with STDIO
|<exec>/path/to/component.py -option a -option b</exec>
Here we see that the component is a Python program and is being passed some switches at startup.
So where's the socket connection in this method? There isn't one. The XML documents are exchanged through standard I/O (STDIO). The component writes XML fragments to STDOUT, and these are received on the Jabber backbone. The component receives XML fragments destined for it on STDIN, fragments that are written out from the Jabber backbone.
Just as a component connected using the TCP sockets method sends an opening document fragment, the component connected with this STDIO method sends an opening document fragment to initiate a connection and conversation:
<?xml version="1.0"?> <stream:stream xmlns="jabber:component:exec" xmlns:stream="http://etherx.jabber.org/streams">
Notice how the namespace that describes this type of conversation is jabber:component:exec.
No secret is required in this case because it is assumed that the component can be trusted if it is specified in the configuration and execution is initiated by jabberd itself.
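To make the mechanism concrete, here is a minimal hypothetical STDIO component in Python, in the spirit of the component.py invocation in Example 4-6; it simply echoes each fragment it receives. The handle() function and the stream header details are illustrative assumptions, not a real component:

```python
import sys

def handle(fragment: str) -> str:
    """Process one incoming XML fragment; a real component would parse it."""
    return fragment  # echo it straight back to jabberd

if __name__ == "__main__":
    # Open the conversation on STDOUT, then exchange fragments via STDIO.
    sys.stdout.write("<?xml version='1.0'?>"
                     "<stream:stream xmlns='jabber:component:exec' "
                     "xmlns:stream='http://etherx.jabber.org/streams'>")
    sys.stdout.flush()
    for line in sys.stdin:          # fragments from the backbone arrive on STDIN
        sys.stdout.write(handle(line))
        sys.stdout.flush()
```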
At this stage, we should be fairly comfortable with the notion of a jabberd backbone and a set of components that combine to provide the features needed for a complete messaging system. We've looked at fragments of configuration in the previous section; now we'll examine the configuration directives in more detail.
It's not uncommon for people installing a Jabber server for the first time to be daunted (I was terrified!) by the contents of the jabber.xml configuration file. But really, for the most part, it's just a collection of component descriptions--what those components are, how they're connected, what packets they are to process, and what their individual configurations are.
There's a concept that encompasses Jabber's configuration approach that is taken from the object-oriented (OO) world--the concept of objects (and classes) and instances thereof. In Jabber server configuration, specifically the description of the components that are to make up a particular Jabber server, we talk about instances of components, not components directly.
In other words, a component is something generic that is written to provide a specific service or set of services; when we put that component to use in a Jabber server, we customize the characteristics of that component by specifying detailed configuration pertaining to how that component will actually work. We're creating an instance of that component.
A Typical Component Instance Description
Each component instance description follows the same approximate pattern:
- Declaration of the component type
- Identification (name) of the component
- Specification of the host filter for packet reception
- Definition of how the component is connected
- Custom configuration for the component
Of course, for any generalized rule, there's always an exception. The log component type, as mentioned earlier in this chapter, is defined slightly differently--while there is a host filter defined, a component connection definition is neither relevant nor present, and the custom configuration is limited; we'll see this later when we take a tour of the jabber.xml.
Let's have a closer look at the Client (to Server) Connections (c2s) component and how an instance of it is specified in the jabber.xml. We're going to use the one that is delivered in the Jabber 1.4.1 server distribution tarball. Example 4-7 shows how the c2s is defined. The definition includes details of how the component code is connected (using the library load method) and contains some custom configuration covering authentication timeout (the <authtime/> tag), traffic flow control (the <karma/> section), and what port c2s is to listen on (the <ip/> tag). We'll look at these custom configuration tags in detail later.
The c2s instance configuration in jabber.xml
<service id="c2s">
  <load>
    <pthsock_client>./pthsock/pthsock_client.so</pthsock_client>
  </load>
  <pthcsock xmlns='jabber:config:pth-csock'>
    <authtime/>
    <karma>
      <init>10</init>
      <max>10</max>
      <inc>1</inc>
      <dec>1</dec>
      <penalty>-6</penalty>
      <restore>10</restore>
    </karma>
    <ip port="5222"/>
  </pthcsock>
</service>
Now let's arrange this instance configuration in diagram form. Figure 4-3 highlights the pattern we're expecting to see.
Arranging component instance descriptions in this way makes it easy to understand how the configuration is put together, and we can begin to see the pattern emerging. Taking each of the elements of the pattern in turn, let's examine what the XML tells us.
The component type is service. We know that from looking at the outermost tag in the XML:
<service id="c2s"> ... </service>
So we know that this component instance will handle <message/>, <presence/>, <iq/>, and <route/> packets.
Each component instance must be uniquely identified within the space of a single Jabber server (configuration). jabberd uses this identification to address the components and deliver packets to the right place. In this case, the identification of this component instance is c2s; it's taken from the id attribute of the component type tag:

<service id="c2s">
The diagram shown in Figure 4-3 states "none specified" for the host specification--the host filter. So what happens now? Well, a host filter is usually one or more <host/> tags containing hostnames to which the component instance will "answer." It's a way of specifying that packets destined for a certain hostname will be received by that component instance.
However, if there are no <host/> tags specified as in this c2s example, then the component instance's identification is taken as the hostname specification. In other words, the <service id="c2s"> declaration in this example, coupled with the lack of any explicit <host/> tag, implies a host filter of c2s. This component instance wants to receive all packets with addresses that have c2s as the hostname. It's the equivalent of this host filter specification:

<host>c2s</host>
The <host/> tag
There is some degree of flexibility in how you specify a hostname with the <host/> tag.
You can specify an absolute hostname like this:

<host>yak</host>
You can specify more than one hostname like this:

<host>conference.yak</host>
<host>conf.yak</host>

For example, if this pair of <host/> tags appeared in an instance specification for the Conferencing component, you could address the component instance using either hostname.
You can use a wildcard character to specify all hostnames within a domain, for example:

<host>*.pipetree.com</host>

will match on all hosts with the domain name pipetree.com.
If you want the component instance to receive packets regardless of the hostname, you can specify an empty tag thus:

<host/>
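The matching rules above can be summarized in a small sketch; host_matches() is a hypothetical helper, using shell-style wildcard matching to stand in for jabberd's own logic:

```python
from fnmatch import fnmatch

def host_matches(filters, host):
    """An empty filter list (an empty <host/>) matches every hostname."""
    if not filters:
        return True
    return any(fnmatch(host, pattern) for pattern in filters)
```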
Component connection method
What is the component? Where do we load it from, or how does it connect to the Jabber backbone? There is a component connection method (see Section 4.1.3, earlier in this chapter) specified in each of the component instance definitions. In our example of the c2s component instance, we see that the library load method is being used to load the ./pthsock/pthsock_client.so shared object library and that the component registration routine pthsock_client() should be called once loading is complete:

<load>
  <pthsock_client>./pthsock/pthsock_client.so</pthsock_client>
</load>
Once we've dealt with the (optional or implied) <host/> tag hostname filters and the component connection method, all that is left is the custom configuration for the component instance itself. This will look different for different components, but there is still a pattern that you can recognize. The configuration always appears in a "wrapper" tag that, like the <host/> and <load/> tags earlier, appears as an immediate child of the component type tag (that's <service/> in our c2s example):
<service id="c2s">
  ...
  <pthcsock xmlns='jabber:config:pth-csock'>
    ... [configuration here] ...
  </pthcsock>
</service>
There are two things to note here:
- The tag name (<pthcsock/>)
- The namespace declaration (xmlns='jabber:config:pth-csock')

The important part of the configuration wrapper tag is the namespace declaration:

jabber:config:pth-csock

because that is what the component actually uses to search for and retrieve the configuration.
As for the actual configuration elements for the c2s component instance that we see here (<authtime/> and <karma/>), we'll take a look at them in Section 4.6.
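A small illustration of what retrieval-by-namespace means in practice; Python's ElementTree is just a stand-in for jabberd's own XML handling, and find_config() is a hypothetical helper:

```python
import xml.etree.ElementTree as ET

# A cut-down component instance description, as in the c2s example.
instance_xml = """
<service id="c2s">
  <load><pthsock_client>./pthsock/pthsock_client.so</pthsock_client></load>
  <pthcsock xmlns="jabber:config:pth-csock"><authtime/></pthcsock>
</service>
"""

def find_config(instance_root, namespace):
    """Return the first child element qualified by the given namespace."""
    for child in instance_root:
        if child.tag.startswith("{%s}" % namespace):
            return child
    return None

# The component looks up its configuration by namespace, not by tag name.
conf = find_config(ET.fromstring(instance_xml), "jabber:config:pth-csock")
```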
A Tour of jabber.xml
Now that we know what patterns to look out for, we're well prepared to dive into a jabber.xml configuration file. As an example, we'll take one that's very similar to the default jabber.xml installed with Version 1.4.1 of Jabber, but we'll plug in some extra components: the conferencing component and a local JUD component.
The entire configuration content, with comment lines dividing up each section, can be found in Appendix A. It's definitely worth turning briefly to have a look at the XML before continuing, to get a feel for how the configuration is laid out.
In order to deal with it without going crazy, let's break down the XML into manageable chunks. We'll build configuration diagrams for each of the top-level tags that are children of the root tag <jabber/>. The opening tags for each of these chunks are as follows:
- <service id="sessions">
- <xdb id="xdb">
- <service id="c2s">
- <log id="elogger">
- <log id="rlogger">
- <service id="dnsrv">
- <service id="jud">
- <service id="s2s">
- <service id="conf">
- <io>
- <pidfile>
Most of these should be recognizable by now, but there are two chunks that we haven't come across yet: <io> and <pidfile>. These aren't components but nevertheless are part of the configuration for jabberd; there are also the two Logging component instances that we have not paid much attention to until now.
Figure 4-4 provides an overview of how the Jabber server is configured. It represents the contents, in diagram form, of the jabber.xml configuration file in Appendix A.
We can see that the bulk of the Jabber server functionality described here is in the form of components. Let's take each of these components--the chunks--one by one and have a closer look. The remainder of this chapter describes each of these chunks in detail.
Component Instance: sessions
The sessions component, described by the configuration XML shown in Example 4-8 and shown in diagram form in Figure 4-5, provides Session Management features for users (the word "users" is employed in the widest possible sense--a user could be a person or a script) connecting with Jabber clients, through XML streams identified with the jabber:client stream namespace.
The component also provides the services that give Jabber its IM capabilities-- services such as roster management, message filtering, store-and-forward ("offline") message handling, and so on. These IM services are loaded individually as part of the component connection phase.
jabber.xml configuration for the sessions component instance
<jsm xmlns="jabber:config:jsm">
  <filter>
    <default/>
    <max_size>100</max_size>
    <allow>
      <conditions>
        <ns/>
        <unavailable/>
        <from/>
        <resource/>
        <subject/>
        <body/>
        <show/>
        <type/>
        <roster/>
        <group/>
      </conditions>
      <actions>
        <error/>
        <offline/>
        <forward/>
        <reply/>
        <continue/>
        <settype/>
      </actions>
    </allow>
  </filter>
  <vCard>
    <FN>Jabber Server on yak</FN>
    <DESC>A Jabber Server!</DESC>
    <URL>http://yak/</URL>
  </vCard>
  <register notify="yes">
    <instructions>Choose a userid and password to register.</instructions>
    <name/>
    <email/>
  </register>
  <welcome>
    <subject>Welcome!</subject>
    <body>Welcome to the Jabber server on yak</body>
  </welcome>
  <!--
  <admin>
    <read>support@yak</read>
    <write>admin@yak</write>
    <reply>
      <subject>Auto Reply</subject>
      <body>This is a special administrative address.</body>
    </reply>
  </admin>
  -->
  <update><jabberd:cmdline flag="h">yak</jabberd:cmdline></update>
  <vcard2jud/>
  <browse>
    <service type="jud" jid="jud.yak" name="yak User Directory">
      <ns>jabber:iq:search</ns>
      <ns>jabber:iq:register</ns>
    </service>
    <conference type="public" jid="conference.yak" name="yak Conferencing"/>
  </browse>
</jsm>
<load main="jsm">
  <jsm>./jsm/jsm.so</jsm>
  <mod_echo>./jsm/jsm.so</mod_echo>
  <mod_roster>./jsm/jsm.so</mod_roster>
  <mod_time>./jsm/jsm.so</mod_time>
  <mod_vcard>./jsm/jsm.so</mod_vcard>
  <mod_last>./jsm/jsm.so</mod_last>
  <mod_version>./jsm/jsm.so</mod_version>
  <mod_announce>./jsm/jsm.so</mod_announce>
  <mod_agents>./jsm/jsm.so</mod_agents>
  <mod_browse>./jsm/jsm.so</mod_browse>
  <mod_admin>./jsm/jsm.so</mod_admin>
  <mod_filter>./jsm/jsm.so</mod_filter>
  <mod_offline>./jsm/jsm.so</mod_offline>
  <mod_presence>./jsm/jsm.so</mod_presence>
  <mod_auth_plain>./jsm/jsm.so</mod_auth_plain>
  <mod_auth_digest>./jsm/jsm.so</mod_auth_digest>
  <mod_auth_0k>./jsm/jsm.so</mod_auth_0k>
  <mod_log>./jsm/jsm.so</mod_log>
  <mod_register>./jsm/jsm.so</mod_register>
  <mod_xml>./jsm/jsm.so</mod_xml>
</load>
Component Type and Identification
The opening tag:

<service id="sessions">

identifies this component instance to the backbone as a service type component and gives it a name (sessions) that can be used for internal addressing and to distinguish it from other component instances.
Assuming that our hostname isn't sessions, it's just as well that we have a <host/> specification in this component instance description:

<host><jabberd:cmdline flag="h">yak</jabberd:cmdline></host>
which means that this Session Management component instance will handle packets addressed to the host yak.
The <jabberd:cmdline flag="h"> ... </jabberd:cmdline> wrapper around the hostname means that this value (yak) can be overridden by specifying a switch -h (hostname) when jabberd is invoked, as is described in Chapter 3. If you're sure you'll never want to override the hostname setting here, this <jabberd:cmdline/> wrapper can safely be removed from the configuration, to leave:

<host>yak</host>
As described earlier, you can specify more than one hostname; use a <host>...</host> pair for each one. This will effectively give you a virtual server effect where Jabber will respond to different hostnames. This is useful in situations such as deployment in an ISP where a single host serves multiple domains. The client data stored on the server (such as rosters, offline messages, and so on) is stored by the xdb component by hostname, so that a separate directory in the spool area will be used for each specified hostname.
For example, if you specified the two hosts:
then the data for two users email@example.com and firstname.lastname@example.org would be stored as shown in Figure 4-6.
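The per-host layout can be sketched in a few lines of Python (illustrative only; the hostnames and the username "alice" here are made up, and the actual directory structure is the one shown in Figure 4-6):

```python
import os.path

def spool_path(spool_root, hostname, username):
    # xdb_file keeps one XML file per JID, grouped into a
    # directory named after the (virtual) hostname.
    return os.path.join(spool_root, hostname, username + ".xml")

# Two virtual hosts get two separate per-host directories:
spool_path("./spool", "example.com", "alice")   # ./spool/example.com/alice.xml
spool_path("./spool", "example.org", "alice")   # ./spool/example.org/alice.xml
```

The same username under two hosts is thus stored as two independent files, which is what makes the per-host data fully separate.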
While specifying multiple hostnames for the Session Management component instance will effect a sort of virtual hosting, with separate data storage as described, the rest of the features of the component will be identical. For example, this means that the list of available services that the client can request--the agent list (old terminology) or browse list (new terminology)--and the session features such as roster management, administration functions, private data storage, and so on will be identical. If you want to offer different services for different hostnames from the same Jabber server, see Section 4.16 later in this chapter.
In Section 4.2.2, we described the elements in this order: component type, component identification, host filter, connection method, custom configuration. Being XML, the configuration format is flexible enough to allow us to manage the ordering (but not the nesting!) of the configuration directives to suit our own layout purposes. In this instance, the custom configuration comes before the connection method.
The sessions component (i.e., the JSM) offers a lot of facilities, which means that in order to attach an instance of the JSM into our Jabber server we have a lot of configuring to do.
Our configuration wrapper tag for the JSM instance is:
The tag name jsm is simply representative of what the configuration pertains to; once loaded, the JSM will look for the configuration by the namespace identifier jabber:config:jsm. Within the wrapper tag, we have different sections that approximately relate to the different services that the JSM is going to provide.
The message filter service, provided by the mod_filter module, allows clients to set up mechanisms that can control and manage incoming messages as they arrive at the recipient's Jabber server--before they start on the final leg of the journey to the recipient's client.
The service allows each user to maintain her own filter, which is a collection of rules. A rule is a combination of conditions and actions. For each incoming message, the message filter service kicks in and goes through the rules contained in the message recipient's filter one by one, checking the characteristics of the incoming message using the conditions defined in each rule. If one of the conditions matches, then the action or actions defined in that rule are carried out and the message filter service stops going through the rules--unless the action specified is continue--in which case the service goes on to the next rule. The continue action makes it possible to chain together a complex series of checks and actions.
Figure 4-7 shows what a filter definition looks like.
Each user's filter is stored on the server using the xdb component (see later). What does a typical filter look like? Well, Example 4-9 shows a filter that contains two rules:
- Checks the message recipient's presence and sends a "holiday" notice back if the presence is set to Extended Away ("xa"--more detail on presence can be found in Section 5.4.2) and forwards the incoming message to a colleague.
- Checks to see if the message is from someone who exists in certain groups in the recipient's roster and, if so, sends an auto-reply to that person, sets the incoming message type to normal (in case it was a chat message), and allows the message to reach its original intended destination. This could be useful in a customer support scenario in which the support representative could handle incoming queries in a queue of normal messages but have an auto-reply sent out for each query telling the customer that her request will be dealt with shortly.
A message filter with two rules
 <query xmlns="jabber:iq:filter">
   <rule name="holiday">
     <show>xa</show>
     <reply>I'm on holiday - back on the 25th!</reply>
     <forward>mycolleague@yak</forward>
   </rule>
   <rule name="custreply">
     <group>CustomersNorth</group>
     <group>CustomersSouth</group>
     <reply>Thanks - an operator will attend to you shortly</reply>
     <settype>normal</settype>
     <continue/>
   </rule>
 </query>
Note that there is no nesting or grouping to distinguish conditions from actions. In the first rule, holiday, there is one condition (<show/>) and two actions (<reply/> and <forward/>); in the second rule, custreply, there are two conditions (the two <group/> tags) and three actions (<reply/>, <settype/>, and <continue/>).
There are a few things to note from this example.
The action represented by the <continue/> tag means that the filter checking will move on to the next rule--which in this case doesn't exist, meaning that the original message will still be delivered. No <continue/> would have meant that the message would have been dropped (that is, it wouldn't have reached its original destination), because when a rule matches, the actions in that rule are carried out and a successful delivery is implied.
The conditions are OR'ed together--if any of the conditions in a rule match, then the rule has matched and all actions defined in the rule are carried out.
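That processing model--ordered rules, OR'ed conditions, stop-on-match unless continue--can be sketched in a few lines of Python (a hypothetical model for illustration, not jabberd's actual implementation; the rule data is made up):

```python
def run_filter(rules, message):
    """Apply mod_filter-style rules to a message (illustrative model).

    Each rule has 'conditions' (predicates, OR'ed together) and
    'actions' (names). Processing stops at the first matching rule
    unless that rule's actions include 'continue'.
    """
    performed = []
    for rule in rules:
        if any(cond(message) for cond in rule["conditions"]):
            performed.extend(rule["actions"])
            if "continue" not in rule["actions"]:
                break
    return performed

# Hypothetical rules mirroring the holiday/custreply example:
rules = [
    {"conditions": [lambda m: m.get("show") == "xa"],
     "actions": ["reply", "forward"]},
    {"conditions": [lambda m: m.get("group") == "CustomersSouth"],
     "actions": ["reply", "continue"]},
]

run_filter(rules, {"show": "xa"})               # ['reply', 'forward']
run_filter(rules, {"group": "CustomersSouth"})  # ['reply', 'continue']
```

In the first call the first rule matches and processing stops; in the second, the matching rule ends with continue, so checking would carry on through any further rules.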
So with this in mind, let's examine the message filter service configuration:
 <filter>
   <default/>
   <max_size>100</max_size>
   <allow>
     <conditions>
       <ns/>
       <unavailable/>
       <from/>
       <resource/>
       <subject/>
       <body/>
       <show/>
       <type/>
       <roster/>
       <group/>
     </conditions>
     <actions>
       <error/>
       <offline/>
       <forward/>
       <reply/>
       <continue/>
       <settype/>
     </actions>
   </allow>
 </filter>
Within the <filter/> configuration wrapper, we have three children: <default/>, <max_size/>, and <allow/>.
- The <default/> tag allows the server administrator to specify default filter rules that will be applied for every user registered on that Jabber server. Specifying something like this:

 <default>
   <rule name="server wide rule">
     <from>email@example.com</from>
     <error>No spam please, we're British!</error>
   </rule>
 </default>

will effectively filter out all messages from our friendly spammer. The rules specified in the <default/> tag will be appended to any personal rules the user may have defined himself. This is important when you consider the order in which the rules are tested and that, once a rule is matched, filter processing stops (unless the <continue/> action is used).

- Filter rule matching is expensive. We don't want to let the user go overboard with filter rules, so we can place an upper limit on the number of rules in a filter with the <max_size/> tag. (The default is large; anyone who can be bothered to create 100 rules deserves to have them all checked, in my opinion!)

- The <allow/> tag specifies the <conditions/> and <actions/> that a user is allowed to use in building rules. Table 4-1 and Table 4-2 show the possible filter conditions and actions.
! Condition !! Example !! Description
| <ns/> || <ns>jabber:iq:version</ns> || Matches the namespace (ns) of an <iq/> packet
| <unavailable/> || <unavailable/> || Matches when the recipient's presence type is unavailable
| <from/> || <from>firstname.lastname@example.org</from> || Matches the sender's Jabber ID (JID): user@host
| <resource/> || <resource>Work</resource> || Matches the recipient's resource
| <subject/> || <subject>Work(!)</subject> || Matches the message's subject (in the <subject/> tag); must match exactly
| <body/> || <body>Are you there?</body> || Matches the message content (in the <body/> tag); must match exactly
| <show/> || <show>dnd</show> || Matches the recipient's presence show--usually one of normal (the default), chat, away, xa (eXtended Away), or dnd (Do Not Disturb)
| <type/> || <type>chat</type> || Matches the type of the incoming message (in the type attribute); could be one of normal, chat, headline, or error
| <roster/> || <roster/> || Matches whether the sender is in the recipient's roster
| <group/> || <group>Friends</group> || Matches whether the sender is in a particular group in the recipient's roster
! Action !! Example !! Description
| <error/> || <error>Address defunct</error> || Sends an error reply to the sender.
| <offline/> || <offline/> || Stores the incoming message offline. The recipient will receive it the next time she logs on.
| <forward/> || <forward>colleague@server</forward> || The message will be forwarded to another Jabber ID (JID).
| <reply/> || <reply>Be right back!</reply> || A reply will be sent to the sender.
| <settype/> || <settype>normal</settype> || Changes the type of the incoming message (see <type/> in the previous table).
| <continue/> || <continue/> || Special action to continue on to the next rule.
Every user, indeed every entity, can maintain a virtual "business card"--a vCard--which is stored server-side. vCards can be retrieved at any time by any user. The <vCard/> tag here in the JSM configuration gives the Jabber server an identity--its vCard can be retrieved also.
You can maintain the server's vCard data in this part of the JSM configuration:
 <vCard>
   <FN>Jabber Server on yak</FN>
   <DESC>A Jabber Server!</DESC>
   <URL>http://yak/</URL>
 </vCard>
All the vCard elements can be used for this vCard configuration, not just the ones shown here. More information on vCards can be found in Section 6.5.1.
Registration instructions such as those defined here:
 <register notify="yes">
   <instructions>Choose a userid and password to register.</instructions>
   <name/>
   <email/>
 </register>
are available to whoever asks for them; in its most formal state, the procedure for creating a new user account on a Jabber server (specifically, in the JSM) includes a first step of asking the server what is required for the registration process.
The registration service is provided by the mod_register module.
In reply to such a request (which is made with an IQ-get request in the jabber:iq:register namespace--see Section 6.2.11 and Section 7.2 for details), the instructions and a list of required fields are returned by mod_register. Note that the fields listed in this <register/> section are over and above the standard <username/> and <password/> fields, which are required in any case for registration; so in this particular configuration, <name/> and <email/> as well as <username/> and <password/> will be sent in the reply. The text inside the <instructions/> tag, also sent, is intended for display by the client if it supports such a dynamic process. Typically the client would request the registration requirements and build a screen asking the user to enter values for the required fields, while displaying the instructions received.
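In set terms, the reply is simply the union of the standard fields and the configured ones (a schematic sketch; the field names are those from the configuration above):

```python
# The standard fields are always required for registration;
# the <register/> section adds its own fields on top.
STANDARD_FIELDS = {"username", "password"}
CONFIGURED_FIELDS = {"name", "email"}   # from <name/> and <email/> above

reply_fields = STANDARD_FIELDS | CONFIGURED_FIELDS
sorted(reply_fields)   # ['email', 'name', 'password', 'username']
```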
The notify="yes" attribute of the <register/> tag will cause a message to be automatically created and sent to the server administrator address(es) for every new account created. See Section 188.8.131.52 for details about specifying administration addresses.
If you want to prevent registration of new accounts on your Jabber server, comment out this <register/> section. The only standard module that handles <iq/> packets in the jabber:iq:register namespace, mod_register, will refuse to handle register requests if there is no <register/> section in the configuration, and so a "Not Implemented" error will be sent in reply to the request for registration details.
The welcome message defined here:
 <welcome>
   <subject>Welcome!</subject>
   <body>Welcome to the Jabber server on yak</body>
 </welcome>
will be sent to all new users the first time they log on. The <subject/> and <body/> contents are simply placed in a normal <message/> and sent off to the new Jabber ID (JID).
While the Unix user acts as the overall administrator for the Jabber server (for starting and stopping jabberd, for example), it is possible to specify administration rights for certain Jabber users that are local to the server. "Local" means users that are defined as belonging to the host (or hosts) specified in the <host/> tag within the same JSM component instance definition. If the host tag is:
then the JIDs email@example.com and firstname.lastname@example.org are local, but email@example.com is not.
The only difference between an administration JID and a "normal" JID is that the former is specified in tags in this section and the latter isn't. When a JID is specified between either the <read/> or <write/> tags, then it can be used to perform "administrative" tasks.
The <admin/> section as delivered in the standard jabber.xml that comes with Version 1.4.1 (see Appendix A) is commented out. Make sure that you remove the comment lines to activate the section if you want to make use of the administrative features:
 <admin>
   <write>admin@yak</write>
   <reply>
     <subject>Auto Reply</subject>
     <body>This is a special administrative address.</body>
   </reply>
 </admin>
If you want to specify more than one JID with administrative rights, simply repeat the tags, like this:
Placing a JID inside a <write/> tag implies that that JID also has <read/> administration rights, so there's not much point in doing something like this:
So what are the administrative features available to JIDs placed inside the <read/> and <write/> tags? For JIDs appearing in a <read/> tag in the <admin/> section, these are the features available:
- Retrieve list of users currently online: By sending one of two possible types of query to the server, a JID can retrieve a list of users that currently have a session on the (local) Jabber server. The results come in one of two forms, depending on the query type. The first query version is of the "legacy" iq:admin type and the second is of the newer iq:browse type. (An example of the latter query can be seen in Example 5-1.) The list of users in both sorts of results contains the user JID, for how long the user has been logged on (measured in seconds), how many packets have been sent from the user's session, and how many packets have been sent to the user's session. The first query version also contains presence information for each user in the list.
- Receipt of administrative queries: Users normally send messages to other users--to other JIDs, where a JID is composed of a username and a hostname (a Jabber server name). The Jabber server itself is also a valid recipient, and the JID in this case is just the server name itself: no username and no @ sign. If a user sends a message to the server, it will be forwarded to the JIDs listed in the <read/> (and <write/>) tags in this <admin/> section, and the reply defined in the <reply/> tag will be sent back to the user as an automated response.

For JIDs appearing in a <write/> tag in the <admin/> section, these are the features available:
- Same as <read/>: JIDs listed in <write/> tags automatically have access to the same features as those JIDs listed in <read/> tags.
- Configuration retrieval: In a similar way to how a list of online users can be requested by sending a query of the iq:admin variety, a copy of the JSM configuration can be requested by sending an iq:admin query to the server. The difference is that in the former user list request, a request tag <who/> is sent inside the query, and in this configuration request, a <config/> tag is sent. The configuration XML, as it is defined in the JSM component instance section of the Jabber server being queried, is returned as a result.
- Sending administrative messages: Two types of administrative messages can be sent: an announcement to all online users and a message of the day (MOTD). The announcement goes out to all users currently online. Similarly, the MOTD goes out to all users, but not only those online; when someone logs on and starts a session, the MOTD will be sent to them too, unlike the announcement, which expires as soon as it is sent. The MOTD will not expire unless explicitly made to do so. The MOTD can also be updated--those who had already received the MOTD won't receive the updated copy during their current session, but anyone logging on after the update will receive the new version of the message.
Update info request
The mod_version module provides a simple service that, at server startup, queries a central repository of Jabber software version information at update.jabber.org. The <update/> configuration tag:
is used to control this query.
If the <update/> tag is present, the query is sent. If the update tag is not present, the query is not sent.
If you do intend leaving the <update/> tag in, you need to make sure that:
- The hostname specified as the value in the tag is resolvable and reachable, as this is your Jabber server address to which the central repository will try to send back information (if there happens to be a newer version of the server software--specifically the JSM component--available).
- Your Jabber server is connected to the Internet, to be able to reach update.jabber.org.

You also need to be running instances of the Hostname Resolution and Server (to Server) Connections components so that your Jabber server can resolve the update.jabber.org host and send the query out. The JSM component version releases are fortunately not so frequent that you require an automated mechanism to keep up with what's new; also, you may wish to run an internal Jabber server with no connection to the outside world. So it is not uncommon for this section to be commented out. The JSM will still function without this piece of configuration.
It is worth noting here, however, that Jabber clients also use the central repository to find out about newer versions of themselves. As all Jabber client communication goes through the server, you need to realize that commenting out the <update/> tag will not stop clients from sending their queries.
Autoupdate of JUD
The Jabber User Directory (JUD) is a service that provides a directory service of usernames and addresses. The service comes in the form of a component--we'll be looking at the component instance definition of a JUD in Section 4.11 later in this chapter. If a Jabber server is running a JUD service, then you can connect to it with your Jabber client and enter your name and address details and query it as you would any directory service to find details of other people.
At the same time, each user has the possibility of maintaining his own vCard--we discussed vCards earlier in Section 184.108.40.206. In the same way that the server's vCard can be requested and retrieved, you can request a user's vCard, and the user whose vCard is requested does not have to be connected at that moment for the request to be fulfilled--the vCards are stored server-side and the Jabber server (not the user's client) handles the request.
So in many ways it makes sense to align the data in the user vCard with data stored in a JUD. The <vcard2jud/> configuration tag allows this alignment to happen automatically; if it appears in the configuration, it will cause any vCard updates (that would be typically performed by users changing their personal information via their Jabber clients) not only to be stored server-side in the vCard but also to be passed on to a JUD.
Which JUD? Well, the first one that's defined in the <browse/> section of the configuration, which is described next. Effectively it means that if you run a local JUD but also connect to the JUD running on jabber.org, you can choose which JUD will be the recipient of the vCard updates by placing that one before any others in the <browse/> list.
If you're not running a JUD locally, or you simply don't want your users' vCard updates going to a JUD, you can safely comment this tag out.
Browsable service information
As the Jabber server administrators, we know what services are available on our Jabber server: what components are connected and what features they offer. We know that we're running a JUD locally and have a Conferencing component.
But how do we let the Jabber clients know? If they're to be able to provide their users with an agreeable experience and expose them to all the server features and services available, we need some way to allow them to request information about what the server that they're connected to offers. Jabber has a powerful feature called browsing that allows one entity to query another entity for information. Browsing defines a simple request/response exchange and with that provides a singular and uniform way to retrieve (on the requester's part) and expose (on the requestee's part) feature information and availability.
Bearing that in mind, we can guess what the <browse/> section of the JSM custom configuration is for:
 <browse>
   <service type="jud" jid="jud.yak" name="yak User Directory">
     <ns>jabber:iq:search</ns>
     <ns>jabber:iq:register</ns>
   </service>
   <conference type="public" jid="conference.yak" name="yak Conferencing"/>
 </browse>
Each child of the <browse/> tag defines a feature, in this case a "service," that the Jabber server offers. Of course, these services are the ones over and above the services provided by the basic components such as Session Management, Hostname Resolution, and so on.
Two services are defined ("exposed") in the <browse/> configuration:
- A local JUD:

 <service type="jud" jid="jud.yak" name="yak User Directory">
   <ns>jabber:iq:search</ns>
   <ns>jabber:iq:register</ns>
 </service>

- And a conferencing service:

 <conference type="public" jid="conference.yak" name="yak Conferencing"/>
The browsing features are covered in Part II, but briefly we can see here that each browsable item is identified by a JID (jid="jud.yak" and jid="conference.yak") and is classified using a category that is the combination of the item's outermost tag and the value of the tag's type attribute. So the JUD is classified as service/jud and has a JID of jud.yak, and the conferencing service is classified as conference/public and has a JID of conference.yak. The type and jid attributes are required. Each item has an optional name attribute for use when the item is displayed, for example.
Some services offer well-known facilities such as search and registration, which are commonly found across different services. These facilities can be described directly in the browse item, so that the entity requesting information about services receives information directly in the first request "hit" as to what facilities are available for each service:
The ns in the facility tagname (<ns/>) stands for namespace; it is via namespace-qualified requests to a service that features are utilized. In this case, the search facility is represented by the jabber:iq:search namespace, and the registration facility is represented by the jabber:iq:register namespace.
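The browsable-item data from this configuration can be modelled as a small lookup (a sketch only; the structure here is illustrative, not jabberd's internal representation):

```python
# Each item exposed by <browse/> has a JID, a category (outer tag /
# type attribute), an optional name, and namespace-qualified facilities.
services = [
    {"jid": "jud.yak", "category": "service/jud",
     "name": "yak User Directory",
     "ns": ["jabber:iq:search", "jabber:iq:register"]},
    {"jid": "conference.yak", "category": "conference/public",
     "name": "yak Conferencing", "ns": []},
]

def features_of(jid):
    # Which namespace-qualified facilities does a given item offer?
    for s in services:
        if s["jid"] == jid:
            return s["ns"]
    return None

features_of("jud.yak")   # ['jabber:iq:search', 'jabber:iq:register']
```

A client receiving the browse result can thus tell in a single request that the JUD supports both searching and registration, while the conferencing item advertises no extra facilities.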
Component Connection Method
Phew! Now that we've got the configuration out of the way, we can have a look at how the JSM is loaded. And we can see immediately from the <load/> tag that it's connected using the library load method:
 <load main="jsm">
   <jsm>./jsm/jsm.so</jsm>
   ...
 </load>
It's clear that the more complex version of the library load method is employed. The jsm module itself is loaded through the <jsm>...</jsm> tag pair and this in turn pulls in the other modules that are specified with the mod_* module name tag pairs.
We've already become acquainted with some of the modules in this list; here's a quick summary of the modules that will be loaded into the JSM:
- mod_echo: This module provides a simple echo service that echoes back whatever you send it.
- mod_roster: This module provides roster management services; the roster is stored server-side.
- mod_time: You can request that the server send you a timestamp local to the server; this is the module that handles this request.
- mod_vcard: This is the module that handles requests for the Jabber server's vCard and also the user vCard management (such as submission to a JUD on change and storing/retrieving the data from the server-side storage).
- mod_last: mod_last provides facilities for returning last logout information for users or, in the case of a query on the server itself, server uptime.
- mod_version: This is the module that provides the version query service described in Section 220.127.116.11.
- mod_announce: The serverwide announcements and MOTD facilities available to Jabber server administrators are provided by this module.
- mod_agents: The mod_agents module responds to requests for agent information made to the server. This is the module that returns the information in the <browse/> tag in the JSM configuration. It can also return a summary of the server consisting of the server's vCard and whether new user registrations are open. When returning <browse/> data, it gives similar information to mod_browse (see the next entry) and is provided for backward compatibility. The agent information is requested with two namespaces, jabber:iq:agent (for information on the server) and jabber:iq:agents (for information on a list of "agents"--the old name for "services"); these namespaces are being retired in deference to the new jabber:iq:browse namespace.
- mod_browse: The mod_browse module responds to browsing requests made on the server or on users defined on that server. The module can also be used by users to modify the information returned if a browse request is made against them.
- mod_admin: This module provides the administrative features described in Section 18.104.22.168. The module itself determines which JIDs are allowed access to which features (according to the configuration in the <admin/> configuration block).
- mod_filter: The services described in Section 22.214.171.124 are provided by this module.
- mod_offline: Being offline--which in this sense means not being connected to the Jabber server and having an (online) session--doesn't prevent a user receiving messages. They are merely stored offline and forwarded to the user when she logs on and starts a session. mod_offline provides these storage and forwarding services in conjunction with the xdb component. See Section 4.5.
- mod_presence: The management of presence information--whether a user is online or offline, what his presence settings currently are, who should be sent the information, and so on--is provided by the mod_presence module.
- mod_auth_plain, mod_auth_digest, and mod_auth_0k: Authentication must take place when a user connects to the Jabber server and wishes to start a session. There are currently three types of authentication supported by the Jabber server; the differentiation is in how the password exchange and comparison is managed:
  - plaintext: User passwords are stored in plaintext on the server and are transmitted from the client to the server in plaintext. A simple comparison is made at the server to validate. If the connection between the client and the server is encoded using SSL, then the plaintext password travels through an encrypted connection.
  - digest: User passwords are stored in plaintext on the server, but no password is transmitted from the client to the server; instead, an SHA-1 digest is created by the client from the concatenation of the client's session ID and the password and sent to the server, where the same digest operation is carried out and the results compared.
  - zero knowledge: User passwords are neither stored on the server nor transmitted from the client to the server. A combination of hash sequencing on the client side with a final hash and comparison on the server side allows credentials to be checked in a secure way.
  There are three mod_auth_* modules, one for each of these authentication types. More information on the authentication methods can be found in Section 7.3 in Chapter 7.
- mod_log: mod_log simply records the ending of each user session.
- mod_register: The mod_register module provides the services to register (create a new user), unregister (remove a user), and maintain user registration details with the server.
- mod_xml: Storage and retrieval of private and shared (public) data by users is made possible by this module.
Component Instance: xdb
The xdb component, described by the configuration XML shown in Example 4-10 and shown in diagram form in Figure 4-8, provides data storage for the server--it is the XML Database.
All storage requirements by components connected to the Jabber backbone can be fulfilled by an xdb component. In normal configurations, there is a single instance, although it is possible to have more than one, each handling separate areas of storage, possibly using different storage mechanisms.
jabber.xml configuration for the xdb component instance
 <xdb id="xdb">
   <host/>
   <load>
     <xdb_file>./xdb_file/xdb_file.so</xdb_file>
   </load>
   <xdb_file xmlns="jabber:config:xdb_file">
     <spool><jabberd:cmdline flag='s'>./spool</jabberd:cmdline></spool>
   </xdb_file>
 </xdb>
Component Type and Identification
The opening tag identifies this component instance to the backbone as an xdb type component, as follows:

 <xdb id="xdb">

This gives it a name, xdb, much in the same way that the sessions service has the name sessions.
For the host filter, we have an empty tag:

 <host/>
specified, which signifies that this xdb component instance will answer data storage and retrieval requests for all hosts. This, in turn, means that all data to be stored server-side will be stored using the same data storage mechanism, in this case xdb_file, which is a simple lowest common denominator storage system based upon directories containing files with XML content; these files are at a ratio of one per JID, plus "global" files where storage of data not tied to a JID is required. An example of this would be JUD's usage of xdb (and implicitly xdb_file in our configuration); a file called global.xml is used to store the user directory information that JUD manages.
If you want to use separate data storage mechanisms for your different virtual servers, you can define more than one xdb instance in your jabber.xml configuration and have the first use one storage system--say, xdb_file--and the second use another--say, a Relational Database Management System (RDBMS)-based system.
You may also want to store data from different virtual hosts in different places on your system; by specifying more than one xdb instance, even if all of them use the same storage mechanism, you can specify a different spool directory in the configuration for each one.
As well as a host filter, there is another filter possible for xdb components. This is the namespace filter, represented by the <ns/> tag.
Every xdb storage and retrieval request is made with a namespace definition; for example, to retrieve the last logoff time for a user, the mod_last module makes a data retrieval request of the xdb component and specifies the jabber:iq:last namespace in that request, and to check if a user is using the zero-knowledge authentication method, the mod_auth_0k module makes a data retrieval request and specifies the jabber:iq:auth:0k namespace.
If you want an xdb component instance to handle only requests qualified with certain namespaces, specify them with the <ns/> tag. Example 4-11 shows the initial part of an xdb component instance definition that is to handle jabber:iq:roster and jabber:iq:last qualified storage and retrieval requests for the host a-domain.com.
Host and namespace filters in an xdb definition
<host>a-domain.com</host> <ns>jabber:iq:roster</ns> <ns>jabber:iq:last</ns> ...
No namespace filter in an xdb component instance definition implies the instance is to handle requests qualified by any namespace, the equivalent of an empty tag:

 <ns/>
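The combined effect of the host and namespace filters can be modelled like this (hypothetical dispatch logic for illustration: a request goes to the first instance whose host and namespace filters both accept it, with an empty filter accepting everything; the instance names are made up):

```python
def accepts(instance, host, ns):
    # None models an empty filter tag (<host/> or <ns/>): accept all.
    hosts, namespaces = instance["hosts"], instance["ns"]
    return (hosts is None or host in hosts) and \
           (namespaces is None or ns in namespaces)

instances = [
    {"name": "xdb-roster", "hosts": ["a-domain.com"],
     "ns": ["jabber:iq:roster", "jabber:iq:last"]},
    {"name": "xdb", "hosts": None, "ns": None},  # catch-all instance
]

def route(host, ns):
    for inst in instances:
        if accepts(inst, host, ns):
            return inst["name"]

route("a-domain.com", "jabber:iq:roster")   # 'xdb-roster'
route("yak", "jabber:iq:auth:0k")           # 'xdb' (falls to catch-all)
```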
The custom configuration section in our xdb component instance definition is specific to the data storage mechanism that we're going to be using. In this case, it's the xdb_file mechanism, and so we have the custom configuration wrapped by a tag qualified with a namespace to match:
Again, the tag name xdb_file is unimportant; the part that must be correct is the namespace jabber:config:xdb_file.
The configuration describes a single setting: where the spool area is. This, in the context of our xdb_file mechanism, is the root directory within which hostname-specific directories are created and used to store JID-specific and global XML datafiles. As the configuration stands here, the value (./spool) can be overridden at server startup time with the -s switch.
There is another configuration tag available for use here too. Using the configuration as it stands here, the xdb_file component would cache data indefinitely; if you were to modify data directly in the files in the spool area, the modifications wouldn't have any effect for data that had already been retrieved for a JID in the course of a server's uptime. In other words, once data has been read from the file, it is cached until the server is stopped.
The <timeout/> configuration tag can be employed to control this caching. Used with a value that represents a number of seconds, the <timeout/> tag will force data in the cache to be purged (and therefore reread from file the next time it's requested) after that number of seconds of lifetime. Table 4-3 shows the effects of various values on caching.
| Less than 0 || <timeout>-1</timeout> || No cache purge will be carried out and the cached data will live forever. This is the equivalent of having no explicit <timeout/> tag set.
| 0 || <timeout>0</timeout> || The cache will be purged immediately. This is the same as having no cache.
| More than 0 || <timeout>120</timeout> || Cached data will be purged after the lifetime specified (in seconds) as the value of the tag. In this example, it's 2 minutes. Don't bother setting a positive value less than 30; the cache purge check mechanism runs only every 30 seconds, so any resolution finer than 30 is meaningless.
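Putting the pieces together, a minimal xdb_file configuration with a 2-minute cache timeout might look like this. The <spool/> value and the jabberd:cmdline override are taken from the surrounding discussion; treat the layout as a sketch rather than a drop-in fragment:

```xml
<!-- Custom xdb_file configuration: spool directory plus cache timeout.
     The -s command-line switch overrides the ./spool default. -->
<xdb_file xmlns="jabber:config:xdb_file">
  <spool><jabberd:cmdline flag='s'>./spool</jabberd:cmdline></spool>
  <timeout>120</timeout> <!-- purge cached data after 120 seconds -->
</xdb_file>
```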
Component Connection Method
The component connection method is library load:
The shared library ./xdb_file/xdb_file.so is loaded and the xdb_file() function is called to initialize the component.
Component Instance: c2s
The c2s component, described by the configuration XML shown in Example 4-12 and shown in diagram form in Figure 4-9, provides the Client (to Server) Connections service--it manages Jabber client connections to the Jabber server.
jabber.xml configuration for the c2s component instance
<load>
  <pthsock_client>./pthsock/pthsock_client.so</pthsock_client>
</load>
<pthcsock xmlns='jabber:config:pth-csock'>
  <authtime/>
  <karma>
    <init>10</init>
    <max>10</max>
    <inc>1</inc>
    <dec>1</dec>
    <penalty>-6</penalty>
    <restore>10</restore>
  </karma>
  <ip port="5222"/>
</pthcsock>
Component Type and Identification
The opening tag:
identifies this component instance to the backbone as a service type component and gives it the name c2s.
The c2s component has no explicit <host/> tag; the identification of the service with the id attribute is enough, and the value of the host filter will be taken as that identification. As long as the specified ID is unique within the context of the whole configuration, the component will be able to function correctly. It is normally set to c2s by convention.
The custom configuration for our c2s component contains three main tags:
- The first tag, <authtime/>, allows us to specify a time limit within which a connecting client must complete the authentication procedure. This includes sending the initial document stream identifier with the jabber:client namespace. Setting this to, say, 10 seconds will allow the client up to 10 seconds to authenticate, after which c2s will drop the connection. Setting the time limit to 0 seconds, which can be accomplished with an empty tag, effectively gives the client an unlimited amount of time within which to authenticate.
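The two <authtime/> forms just described can be sketched like this (the 10-second value is illustrative, following the pth-csock configuration style shown earlier):

```xml
<!-- Give connecting clients 10 seconds to authenticate... -->
<authtime>10</authtime>

<!-- ...or no limit at all: the empty-tag form -->
<authtime/>
```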
- The next tag we find in the c2s component instance configuration is <karma/>. This is a way of controlling bandwidth usage through the connections and will be explained in Section 4.13 later in this chapter.
- Then we come to the <ip/> configuration tag. The standard port for client connections is 5222, and this is where it is specified--in the port attribute. The <ip/> tag itself can contain an IP address or hostname; if you specify one, then socket connections will be accepted only on that specific combination of port and IP address. Not specifying an IP address means that the c2s service will bind to the port on all (INADDR_ANY) IP addresses on the host. You can specify more than one combination of port and IP address using multiple <ip/> tags:
<ip port="5222"/>
<ip port="5225">127.0.0.1</ip>
which here means client socket connections will be listened for on port 5222 on any IP address, and on port 5225 on the localhost address. Three other configuration tags--<rate/>, <alias/>, and <ssl/>--are not used here but are worth identifying now:
- <rate/>, like <karma/>, is used to control connectivity and will be explained along with that tag in Section 4.13 later.
- <alias/> is a way of providing alias names for your Jabber server. When a Jabber client makes a connection, the opening gambit is the root of the XML document that is to be streamed over the connection:
<stream:stream to="furrybeast" xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams">
The name furrybeast may be a DNS alias for yak and is specified by the client here in the to attribute of the document's root tag (<stream:stream/>). With the <alias/> tag, we can "fix" the incoming host specification by replacing it with what we, as the server, want it to be. If this document root tag were to be sent to our Jabber server configured as yak, and we had an <alias/> tag translating furrybeast to yak, then the incoming hostname specification furrybeast would be recognized and translated to yak in the response:
<stream:stream xmlns:stream='http://etherx.jabber.org/streams' id='3AE71597' xmlns='jabber:client' from='yak'>
Rather than specify a hostname to translate, a default alias name can be specified, meaning that any connections to the c2s component would have their Jabber hostname specification translated to yak if necessary. This is an indication to the client that the hostname yak should be used in any reference to that Jabber server.
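The two <alias/> usages just described might be written as follows. The to attribute syntax is an assumption based on the jabberd 1.4 configuration style, so check your server's documentation before relying on it:

```xml
<!-- Translate one specific incoming hostname (assumed syntax) -->
<alias to='yak'>furrybeast</alias>

<!-- Default alias: translate any incoming hostname to yak (assumed syntax) -->
<alias to='yak'/>
```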
- <ssl/> is the equivalent of the <ip/> tag and works in exactly the same way, with two exceptions: an IP address must be specified, which means that something like <ssl port="5223"/> is not allowed; and the connections are encrypted using SSL, which means that the Jabber server must have been configured to use SSL (see Chapter 3 and Section 4.13 later in this chapter for details).
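A sketch of what the <ssl/> tag might look like in the c2s configuration, assuming the conventional SSL client port 5223 and an illustrative IP address (remember that, unlike <ip/>, the address is mandatory here):

```xml
<!-- SSL-encrypted client connections; the IP address is required.
     5223 is the conventional SSL port; 127.0.0.1 is illustrative. -->
<ssl port="5223">127.0.0.1</ssl>
```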
Component Connection Method
The component connection method is library load:
The shared library ./pthsock/pthsock_client.so is loaded and the pthsock_client() function is called to initialize the component.
Logging Definition: elogger
It has already been intimated that the log type components are exceptions to the general pattern when it comes to defining what they are in relation to the Jabber server. In fact, the logging "components" aren't really separate components at all--they are part of the jabberd backbone. Nevertheless, it is still worthwhile referring to them as components as they can be defined with different characteristics to perform different logging tasks.
The configuration XML for elogger is represented in diagram form in Figure 4-10 and is shown in Example 4-13.
jabber.xml configuration for elogger
<log id='elogger'>
  <host/>
  <logtype/>
  <format>%d: [%t] (%h): %s</format>
  <file>error.log</file>
  <stderr/>
</log>
Component Type and Identification
The opening tag clearly denotes a log type component. The name given in the id attribute is elogger.
The elogger definition will record log records for any host, according to the empty host filter tag specified here: <host/>.
Apart from the host filter declaration, every other tag within a <log/> definition can be regarded as configuration. Taking each tag in turn, we have:
- <logtype/>: This tag declares which types of logging record will be handled by this logging definition. Actually, the <logtype/> tag is more of a filter (like <host/>) than configuration, but that's splitting hairs. The tag can either be empty or contain one of the following values: alert, notice, record, or warn. You can specify more than one <logtype/> tag to capture more than one log type. If you specify an empty tag (as is the case with the log component here), then all log types will be captured and handled, apart from any log types that are explicitly declared elsewhere in other logging components. What does this mean? Well, in our case, since we have a second log type component, rlogger (described in the next section), that has an explicit <logtype>record</logtype> declaration, this log component won't receive record type log records.
- <format/>: A logging component will typically write out the data it receives in a human-readable format. With the <format/> tag, we can specify how the data appears. There are four variables that we can embellish with whatever text we like to form something that will be meaningful to us (and perhaps easily parseable by our scripts). The four variables are shown in Table 4-4.
|%d||The date and time of the log record|
|%t||The log record type|
|%h||The host that the log record refers to|
|%s||The actual log message|
In elogger's <format/> tag, we have %d: [%t] (%h): %s, so a typical log record written by elogger might look like this:
20010420T21:38:30: [warn] (yak): dropping a packet to yak from firstname.lastname@example.org/1.4.1: Unable to deliver, destination unknown
- <file/> and <stderr/>: Typically the output from a logging component goes to a file. You can specify the name of the file with the <file/> tag; for elogger, this is <file>error.log</file>. Additionally, it's possible to have the output from a logging component written to STDERR; place the empty <stderr/> tag in the logging component's definition to have this happen.
Logging Definition: rlogger
Logging definition elogger is a general catchall component that serves to direct all unhandled log records to a log file, error.log. The logging definition rlogger, on the other hand, has been defined specifically to capture and store (to a file--record.log) record type log records.
Components such as c2s, s2s, and sessions write record type log records, examples of which can be seen in Example 4-14. The "login ok" messages are from the c2s component, the "dialback" messages are from the s2s component, and the "session end" message is from the sessions component.
Typical record type log records
20010811T14:27:19 email@example.com login ok 126.96.36.199 home
20010811T14:28:25 firstname.lastname@example.org login ok 188.8.131.52 yeha
20010811T14:29:20 conference.jabber.org out dialback 3 184.108.40.206 gnu.mine.nu
20010811T14:29:20 update.jabber.org out dialback 3 220.127.116.11 gnu.mine.nu
20010811T14:29:20 gnu.mine.nu in dialback 16 18.104.22.168 conference.jabber.org
20010811T14:29:20 jabber.org out dialback 55 22.214.171.124 gnu.mine.nu
20010811T14:35:25 email@example.com session end 486 30 57 home
20010811T14:36:39 firstname.lastname@example.org login ok 126.96.36.199 home
20010811T14:56:50 gnu.mine.nu in dialback 2
The configuration XML for rlogger is shown in Example 4-15 and is represented in diagram form in Figure 4-11.
jabber.xml configuration for rlogger
<log id='rlogger'>
  <host/>
  <logtype>record</logtype>
  <format>%d %h %s</format>
  <file>record.log</file>
</log>
Component Type and Identification
Like elogger, rlogger is identified as a log type component. It takes its name from the id attribute.
Again, like elogger, this logging definition will handle log records for any hosts.
The custom configuration of rlogger is very similar to that of elogger, except that the target file is called record.log (the <file/> tag), the output format is slightly different (the <format/> tag), and no output to STDERR is desired.
Component Instance: dnsrv
The dnsrv component, described by the configuration XML shown in Example 4-16 and shown in diagram form in Figure 4-12, provides routing logic and name resolution for packets that are destined for a nonlocal component, in other words, for a component that is running on another Jabber server.
jabber.xml configuration for the dnsrv component instance
<load>
  <dnsrv>./dnsrv/dnsrv.so</dnsrv>
</load>

<dnsrv xmlns="jabber:config:dnsrv">
  <resend service="_jabber._tcp">s2s</resend>
  <resend>s2s</resend>
</dnsrv>
Once started, the component forks to spawn a child process that services the actual name resolution requests and the route determination. The component and its child communicate over a simple XML stream, within which hostnames are passed to the child process in a query tag, and answers are passed back in the form of attribute additions to the original query tag.
Component Type and Identification
The component is a service and is identified with the name dnsrv:
The dnsrv component is to provide hostname resolution and routing for all component activity within the Jabber server. For this reason, it needs to be open to all comers and has an empty host filter tag (<host/>).
The dnsrv component provides hostname lookup and dynamic routing services. To this end, the configuration concerns itself with defining how the routing is to be determined.
The configuration, identified with the jabber:config:dnsrv namespace, is essentially a list of <resend/> tags; each one tells the child process how to resolve a hostname (according to the service requested) and where to send the packet once resolution succeeds. So far, so good; the resolution part of the deal is covered. But what happens once an IP address has been returned? The packet destined for the nonlocal component must be sent on its way--but via where? This is what the next-delivery-point data specifies.
If we examine the configuration, we see this:
<dnsrv xmlns="jabber:config:dnsrv">
  <resend service="_jabber._tcp">s2s</resend>
  <resend>s2s</resend>
</dnsrv>
The configuration is a list of services to try during the resolution request. This list has two items. The first has the service="_jabber._tcp" attribute that says: "Try for the Jabber (via TCP) service when trying to resolve a name (using an SRV record lookup request) and, if successful, send the packet to the s2s component." The second is the default that says: "If you've reached here in the list and haven't managed to get a resolution for a particular service, just resolve the name normally (using a standard resolver call such as gethostbyname()) and send the packet to the s2s component."
The value _jabber._tcp isn't Jabber configuration syntax; it's the prefix format to use with a domain name when making DNS SRV lookups. (See RFC 2782 for more details.)
Let's look at what happens in a typical request of dnsrv. To set the scene: a client connecting to the Jabber server to which this component instance is connected has requested software update information (see earlier in this chapter), and, as no local component with an identification or filter that fits the hostname update.jabber.org was found in the configuration, a request to resolve and route to this hostname is passed on to dnsrv:
# The component receives a request for update.jabber.org.
# Resolution is attempted for the first <resend/> tag in the list; this is the one that implies an SRV lookup by specifying the _jabber._tcp service definition.
# The SRV lookup fails--there are no SRV records maintained for the Jabber service for update.jabber.org.
# Resolution is attempted for the second (and last) <resend/> tag in the list. No service is specified in this tag, so a normal resolution lookup is made.
# The lookup is successful and returns an IP address.
# The success means that we've got a match, and the packet destined for update.jabber.org can be passed on to the component specified as next in the chain, which is s2s, as specified in the <resend>s2s</resend> tag.
Component Connection Method
The dnsrv component is loaded as a shared library with the library load method:
The dnsrv() function is called when loading is complete to initialize the service.
Component Instance: conf
The Conferencing component is a service that provides group chat facilities in a Jabber environment. Rooms can be created and people can join and chat, similarly to the way they do in IRC (Internet Relay Chat) channels. The component instance described by the configuration XML in Example 4-17 is shown in diagram form in Figure 4-13.
jabber.xml configuration for the conf component instance
<load>
  <conference>./conference-0.4.1/conference.so</conference>
</load>

<conference xmlns="jabber:config:conference">
  <public/>
  <vCard>
    <FN>yak Chatrooms</FN>
    <DESC>This service is for public chatrooms.</DESC>
    <URL>http://yak/chat</URL>
  </vCard>
  <history>20</history>
  <notice>
    <join> has become available</join>
    <leave> has left</leave>
    <rename> is now known as </rename>
  </notice>
  <room jid="email@example.com">
    <name>The Kitchen</name>
    <notice>
      <join> has entered the cooking melee</join>
      <leave> can't stand the heat</leave>
      <rename> now answers to </rename>
    </notice>
  </room>
</conference>
The component acts as a sort of third party, and all interaction between room participants is through this third party. This makes it possible to support privacy (such as using nicknames to hide users' real JIDs) and other features.
Component Type and Identification
The Conferencing component is identified to the backbone as a service:
and is given the identity conf, with which the component instance will register itself when loaded.
By convention, the Conferencing component is often addressed (by clients) as conference.<hostname>; we see that convention has been followed in that the host filter for this instance is <host>conference.yak</host>,
which means that all packets destined to any JID at the hostname conference.yak will be sent to the conf component instance. This matches the identification of this service in the <browse/> section of the JSM custom configuration.
Configuration of the Conferencing component is straightforward and is identified with the jabber:config:conference namespace:
|<conference xmlns="jabber:config:conference"> ...
We can see from the contents of the custom configuration that there are a number of elements:
- Public or private service: Specifying (with the <public/> tag) that a conference service is public means that users are allowed to browse the elements that the service is controlling, namely, the rooms. Rooms are either precreated or created on the fly by the first user to specify a new name when requesting to join a room. Specifying (with the <private/> tag) that a conference service is private means that users are allowed to browse only rooms that they already know about, meaning rooms in which they're already present.
- vCard information: The conference service component can have its own vCard information, which can be requested at any time. Here is where that vCard information can be maintained. Like the vCard for the JSM service, this particular definition uses only a few of the many possible vCard fields, for example:
<vCard>
  <FN>yak Chatrooms</FN>
  <DESC>This service is for public chatrooms.</DESC>
  <URL>http://yak/chat</URL>
</vCard>
- Message history: When you join a room, it is sometimes useful to see some of the most recent messages from the room's conversation(s). The <history/> tag allows you to specify how many previous messages are relayed to new room joiners. If you don't specify a <history/> tag, a default value of 10 will be used.
- Action notices: Three main "events" can happen with users and rooms: a user enters a room; a user leaves a room; a user changes her nickname. When any of these events occurs, the conference service component sends some text to the room to notify the participants. You can modify the text that gets sent with the tags in the <notice/> configuration element:
<notice>
  <join> has become available</join>
  <leave> has left</leave>
  <rename> is now known as </rename>
</notice>
- Rooms: A room is normally created when a user requests to join a room that doesn't already exist, and the requesting user is determined to be the room's owner. Alternatively, a room may be precreated by using the <room/> tag when the service starts up. Each room has a JID. Optionally, you can give the room a name (which may be displayed by clients as the room's title) and its own action notices:
<room jid="firstname.lastname@example.org">
  <name>The Kitchen</name>
  <notice>
    <join> has entered the cooking melee</join>
    <leave> can't stand the heat</leave>
    <rename> now answers to </rename>
  </notice>
</room>
The other settings that can be specified within a <room/> tag are shown in Table 4-5.
|<nick/>||This signifies that a nickname is required for entry to the room. If one is not specified (in the join request), a nickname will be constructed for the user dynamically, usually the JID with a numeric suffix to make it unique in the room.|
|<secret/>||Rooms can be protected from unauthorized entry by specifying a secret that will be required on entry.|
|<privacy/>||This tag signifies that a privacy mode is supported, which means that users' real JIDs will be hidden from browsing requests.|
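A sketch of a precreated room pulling together the settings from Table 4-5; the JID, name, and secret value are illustrative, and the nesting assumes the conference 0.4 configuration style shown above:

```xml
<!-- A protected, privacy-enabled room; jid, name, and secret are illustrative -->
<room jid="kitchen@conference.yak">
  <name>The Kitchen</name>
  <nick/>                    <!-- a nickname is required to enter -->
  <secret>stockpot</secret>  <!-- this secret is required on entry -->
  <privacy/>                 <!-- hide participants' real JIDs -->
</room>
```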
Component Connection Method
The Conferencing component is compiled into a shared object library (./conference-0.4.1/conference.so) and is connected to the Jabber backbone with the library load method:
Once loaded, the function conference() is called to initialize the component and perform setup tasks, such as creating rooms specified in the configuration.
Component Instance: jud
As mentioned already, the JUD is a user directory service that provides storage and query facilities for the user's name and address data. The jud component instance described by the configuration XML in Example 4-18 is shown in diagram form in Figure 4-14.
The JUD that is defined here relies upon the xdb component for data storage and retrieval services, which in turn means that, in this case, the data is stored in XML format in a file under the directory defined in the <spool/> tag in the xdb component instance's definition. All the data managed by the JUD is stored in one lump, with no specific JID associated with it; this means that xdb's engine, xdb_file, will store it as a single file called global.xml under the directory named after the JUD hostname jud.
jabber.xml configuration for the jud component instance
<load>
  <jud>./jud-0.4/jud.so</jud>
</load>

<jud xmlns="jabber:config:jud">
  <vCard>
    <FN>JUD on yak</FN>
    <DESC>yak User Directory Services</DESC>
    <URL>http://yak/</URL>
  </vCard>
</jud>
Component Type and Identification
JUD is clearly a service component and is identified as such with this tag:
The name given to the component instance is jud.
Requests of the JUD, such as searches or registrations (a user "registers" with the JUD and thereby causes his name and address details to be stored by the JUD), must be directed specifically at the JUD, which we have identified in the <browse/> area of our JSM configuration (see the next section) as jud.yak.
As we have identified the JUD in this way, requests will reach the JUD by way of this hostname, which is therefore what we want to filter on:
Requests to any other hostnames are not appropriate for the JUD to handle and will therefore be filtered out.
There is not much to configure in the JUD; it is a simple user directory service, and many of the features are currently hardcoded: where the data is stored, what data fields are stored per JID, and so on. The only configuration we can maintain is the JUD's vCard information. Just as the Jabber server itself and each user can have a vCard, components can have vCards too, and these component vCards can be requested in the same way. (The Jabber server's own vCard is actually that of the JSM, which is the heart of the Jabber server.)
<jud xmlns="jabber:config:jud">
  <vCard>
    <FN>JUD on yak</FN>
    <DESC>yak User Directory Services</DESC>
    <URL>http://yak/jud</URL>
  </vCard>
</jud>
The namespace that declares the JUD configuration is jabber:config:jud.
Component Connection Method
The JUD defined here is implemented as a set of C programs compiled into a shared object (./jud-0.4/jud.so) library. It is connected to the backbone with the library load connection method:
|<load> <jud>./jud-0.4/jud.so</jud> </load>
and the function jud() is called to initialize the component.
Component Instance: s2s
Just as the c2s component provides the Client (to Server) Connections service, so the s2s component provides the Server (to Server) Connections service. The XML configuration that describes the s2s component is shown in Example 4-19 and is represented in diagram form in Figure 4-15.
jabber.xml configuration for the s2s component instance
<load>
  <dialback>./dialback/dialback.so</dialback>
</load>

<dialback xmlns='jabber:config:dialback'>
  <legacy/>
  <ip port="5269"/>
  <karma>
    <init>50</init>
    <max>50</max>
    <inc>4</inc>
    <dec>1</dec>
    <penalty>-5</penalty>
    <restore>50</restore>
  </karma>
</dialback>
Component Type and Identification
The component type is service, and the instance here is identified as s2s:
Like the c2s component instance definition, no explicit host filter is set for s2s. The identification of the component instance as s2s acts as a backup host filter.
The configuration for s2s is similar to that of c2s; after all, it is about managing connections to other hosts. The configuration namespace is, however, a little odd: jabber:config:dialback.
Dialback? Well, in order to prevent spoofing on a connecting server's part, the s2s component implements an identity verification mechanism that is used to check that a connecting server is who it says it is. See Dialback for more details.
There are three immediate child tags in the configuration wrapper tag:
- <legacy/>: This acts as a flag that allows "legacy" Jabber servers to connect (or disallows them, if it is absent). A legacy Jabber server is one that is at Version 1.0 and, of more relevance, has no support for the dialback mechanism. Without this tag, an incoming connection from a Version 1.0 Jabber server that didn't support dialback would be refused.
- <ip/>: While a normal Jabber server listens for client connections on port 5222, it listens for connections from other Jabber servers on port 5269. This is specified with the <ip/> tag, which has the same characteristics as the <ip/> tag in the c2s configuration settings (more than one tag is allowed, and a specific IP address is optional).
- <karma/>: Karma is used in the s2s component to control connection traffic, just as it is used in c2s. See Section 4.13 later in this chapter for more details.
Component Connection Method
The library load method is used to connect the s2s component to the backbone:
The dialback() function is called in the shared library after it has been loaded.
The io Section
The <io/> section of the jabber.xml configuration file, shown in Example 4-20 and represented in diagram form in Figure 4-17, is where a number of settings relating to socket communication with the Jabber server are set.
jabber.xml configuration for the io section
<karma>
  <heartbeat>2</heartbeat>
  <init>64</init>
  <max>64</max>
  <inc>6</inc>
  <dec>1</dec>
  <penalty>-3</penalty>
  <restore>64</restore>
</karma>
<rate points="5" time="25"/>
Although a distinct section, io does not describe a component with custom configuration or a connection method; the contents are merely settings. Let's examine each of these settings here.
The <rate/> Tag
The <rate/> tag affords us a sort of connection throttle by allowing us to monitor the rate at which incoming connections are made and to put a hold on further connections if the rate is reached.
The rate is calculated to be a number of connection attempts--from a single IP address--within a certain amount of time. We can see these two components of the rate formula as attributes of the <rate/> tag itself:
| <rate points="5" time="25"/>
This means acceptance of incoming connections from an individual IP address will be stopped if more than five connection attempts (points) are made in the space of 25 seconds (time).
The "rating" (the throttling of connection attempts) will be restored at the end of the period defined (25 seconds in this case).
The effect of a <rate/> tag in this io section is serverwide; all socket connections (for example, those of c2s and s2s) can be rate-limited. If there is no explicit <rate/> specification in a particular service that listens on a socket for connections, then the specification in this io section is used. If no <rate/> tag is specified in this io section, then the server defaults are used--these are actually the same as what's explicitly specified here.
The <karma/> Tag
Like the <rate/> tag, <karma/> is used to control connectivity. Whereas rating helps control the number of connections, karma allows us to control the data flow rate per connection once a connection has been made.
The concept of karma is straightforward: each socket has a karma value associated with it. We can understand it better if we think of each entity (connecting through a socket) as having a karma value. The higher the value--the more karma--an entity has, the more data it is allowed to send through the socket. Just as rating is a throttle for connections, karma is a throttle for data throughput.
There are certain settings that allow us to fine-tune our throughput throttle. Table 4-6 lists these settings, along with the values explicitly set in each of the c2s and s2s component sections in our jabber.xml file. Notice how the settings for the Server (to Server) Connections component are considerably higher than those for the Client (to Server) Connections--this is based on the assumption that server-to-server traffic will be greater than client-to-server on a socket-by-socket basis.
! Setting !!<tt>c2s</tt> values!!<tt>s2s</tt> values!!Description
| <tt><init/></tt> || 10 || 50 || The initial value for <tt>karma</tt> on a new socket.
| <tt><max/></tt> || 10 || 50 || The maximum <tt>karma</tt> value that can be attained by a socket.
| <tt><inc/></tt> || 1 || 4 || By how much the <tt>karma</tt> value is incremented (over time).
| <tt><dec/></tt> || 1 || 1 || By how much the <tt>karma</tt> value is decremented in a penalty situation.
| <tt><penalty/></tt> || -6 || -5 || The <tt>karma</tt> value is plunged to this level once it falls to <tt>0</tt>.
| <tt><restore/></tt> || 10 || 50 || The <tt>karma</tt> value is boosted to this level once it rises (after a penalty) to <tt>0</tt>.
The relationship between an entity's karma and how much data it is allowed to write to the socket is linear; in fact, the amount is:

(karma value * 100) bytes

every 2 seconds. The multiplier (100) and the karma period (2) are hardcoded into the server; a recompilation would be required to change these values.
Over time, an entity's karma value will increase every karma period (2 seconds), up to a maximum value (we need a ceiling on how much we're going to allow an entity to send!).
The same karma formula is used to penalize an entity for sending too much data. If more than (karma * 100) bytes are sent within a karma period, the entity's karma value is decreased. Once the value reaches 0, it is plunged to a negative number, meaning that the entity must take a breather until the value grows back to 0 (over time, it will). At that point, the value will be restored to a level that gives the entity a chance to start sending data again. With the c2s defaults, for example, a client starts with karma 10 (able to write 1,000 bytes every 2 seconds); a penalty plunges it to -6, and with an increment of 1 per 2-second period it takes 12 seconds to climb back to 0, whereupon it is restored to 10.
The <ssl/> Tag
If you have compiled your Jabber server with SSL (see Chapter 3) and want to use SSL-encrypted connections, you will have to have specified the <ssl/> tags in the configuration of the c2s component instance. Furthermore, you must specify the location of your SSL certificate and key file. There is an <ssl/> tag in this io section for this purpose.
You can have separate files for each IP address specified in the c2s component instance configuration's <ssl/> tag. Example 4-21 shows the specification of two .pem files--one for each of two IP addresses.
Specifying SSL certificate and key files per IP address
| <ssl> <key
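The example above survives only as a fragment. A sketch of the likely shape, assuming jabberd 1.4's <key ip='…'> syntax; the IP addresses and .pem paths are illustrative:

```xml
<!-- One certificate-and-key .pem file per IP address
     (addresses and filenames are illustrative) -->
<ssl>
  <key ip='192.168.11.1'>/etc/jabber/cert1.pem</key>
  <key ip='192.168.11.2'>/etc/jabber/cert2.pem</key>
</ssl>
```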
The <allow/> and <deny/> Tags
You can control at the IP address and network level who can connect to your Jabber server with the <allow/> and <deny/> tags.
The default (when no tags are specified) is to allow connections from everywhere. If you use <allow/> tags, then connections will be allowed only from the addresses or networks specified. If you use <deny/> tags, then connections will be denied from those addresses or networks specified. If you have both <allow/> and <deny/> tags, the intersection of addresses between the two tag sets will be denied. In other words, <deny/> overrides <allow/>.
The tags wrap individual IP addresses, which are specified using the <ip/> tag, or network addresses, which are specified using the <ip/> tag in combination with the <mask/> netmask tag. Example 4-22 shows connections to a Jabber server being limited to hosts from two internal networks with the exception of one particular IP address, and a specific host on the Internet.
Using <allow/> and <deny/> to control connections
<allow>
  <ip>192.168.11.0</ip>
  <mask>255.255.255.0</mask>
</allow>
<allow>
  <ip>184.108.40.206</ip>
</allow>
<deny>
  <ip>192.168.11.131</ip>
</deny>
The pidfile Section
The pidfile section simply specifies the name of the file to which the process ID (PID) of the Jabber server will be written at startup. In this case, the name of the file is ./jabber.pid.
The XML describing this section is shown in Example 4-23 and is represented in diagram form in Figure 4-18.
jabber.xml configuration for the pidfile section
Managing the Configuration
Now that we've had a tour of the components and have an idea what sorts of configurations are possible, you may be wondering whether there's a way to retain some sort of overview of the actual XML. Dropping component instance definitions in and out of the configuration file is somewhat tedious, and certainly when editing such a large file, it's not difficult to lose sense of direction and comment out or edit the wrong section.
Help is at hand, in the form of the <jabberd:include/> tag. This tag comes from the same stable as <jabberd:cmdline/> and provides the Jabber server administrator with ways to better manage the XML configuration.
The contents of a file specified with the <jabberd:include/> tag are imported (included) in the position that the <jabberd:include/> tag occupies. Depending on what the root tag in the file to be included is, the import is done in one of two ways:
- If the root tag matches the parent tag of <jabberd:include/>, the contents of the file minus the root tag are included.
- If the root tag does not match the parent tag of <jabberd:include/>, then the entire contents of the file are included.

For example, if we have a section like this in the jabber.xml file:
...
<conference xmlns="jabber:config:conference">
  <public/>
  <vCard>
    <FN>yak Chatrooms</FN>
    <DESC>This is a public chatroom service.</DESC>
    <URL>http://yak/chat</URL>
  </vCard>
  ...
  <jabberd:include>./rooms.xml</jabberd:include>
</conference>
...
and the content of ./rooms.xml looks like this:
<room jid="email@example.com">
  <name>The Kitchen</name>
  <notice>
    <join> comes to add to the broth-spoiling</join>
    <leave> can't stand the heat</leave>
    <rename> is now known as </rename>
  </notice>
</room>
<room jid="firstname.lastname@example.org">
...
then these rooms will be defined to the Conferencing component as if the configuration XML had appeared directly inside of the <conference/> configuration wrapper tag.
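The two inclusion rules can be sketched in Python. This is an illustration of the semantics only, using a plain <include/> element in place of the namespaced <jabberd:include/>:

```python
import xml.etree.ElementTree as ET

def expand_include(parent, include_elem, file_xml):
    """Splice an included file into `parent` at the position occupied
    by `include_elem`, following the root-tag rule described above."""
    root = ET.fromstring(file_xml)
    pos = list(parent).index(include_elem)
    parent.remove(include_elem)
    if root.tag == parent.tag:
        # Root matches the parent: include the children, minus the root tag.
        for child in reversed(list(root)):
            parent.insert(pos, child)
    else:
        # Root differs: include the entire element, root tag and all.
        parent.insert(pos, root)

# Root tag differs from <conference/>: the whole <room/> element is spliced in.
conf = ET.fromstring('<conference><public/><include/></conference>')
expand_include(conf, conf.find('include'),
               '<room jid="a@b"><name>Kitchen</name></room>')
print([c.tag for c in conf])   # ['public', 'room']

# Root tag matches: only the children are spliced in.
conf2 = ET.fromstring('<conference><include/></conference>')
expand_include(conf2, conf2.find('include'),
               '<conference><public/><history>10</history></conference>')
print([c.tag for c in conf2])  # ['public', 'history']
```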
We can put the <jabberd:include/> tag to good use and organize our configuration component instances as shown in Example 4-24.
Configuration XML organized with <jabberd:include/>
<!-- Core components -->
<jabberd:include>./sessions.xml</jabberd:include>
<jabberd:include>./config/standard/xdb.xml</jabberd:include>
<jabberd:include>./config/standard/c2s.xml</jabberd:include>

<!-- Testing -->
<!--
<jabberd:include>./config/local/conference.xml</jabberd:include>
<jabberd:include>./config/test/test.service.xml</jabberd:include>
-->

<!-- Logging -->
<jabberd:include>./config/standard/elogger.xml</jabberd:include>
<jabberd:include>./config/standard/rlogger.xml</jabberd:include>

<!-- Internal-only server right now
<jabberd:include>./config/standard/dnsrv.xml</jabberd:include>
<jabberd:include>./config/standard/s2s.xml</jabberd:include>
-->

<!-- Misc -->
...

<!-- IO (incl. karma), PIDfile -->
<jabberd:include>./config/standard/io.xml</jabberd:include>
<jabberd:include>./config/standard/pidfile.xml</jabberd:include>
The XML in Example 4-24 gives us a great overview of which components are included in our Jabber server; we have the core components providing the Session Management, Client (to Server) Connections, and Data Storage services. There are a couple of components under test (Conferencing and a custom component we're calling test.service) that are currently deactivated. There are also Logging services in their standard configuration. The components providing facilities for connecting to other Jabber servers--Server (to Server) Connections and Hostname Resolution--are currently inactive, meaning that as configured, the Jabber server will be purely internal. There's a local JUD defined too; and finally we have the io and pidfile specifications--also abstracted out into separate XML chunks.
This works well especially if there are certain parts of the configuration--for example, certain component instance definitions--that don't normally change; you can see that many of the component configuration files are in a "standard" directory, which by convention could signify that they're the same as the XML configuration as delivered and are not likely to change.
The <jabberd:cmdline/> Tag
The <jabberd:cmdline/> tag was mentioned in Chapter 3 as a way of providing a command-line hook into the configuration: values stored in the XML could be overridden by command-line switches used when invoking jabberd.
The tag is used in the standard XML configuration (see Figure 4-3) to allow replacement of the hostname and spool directory:
In fact, this tag can be used in most places in the XML. So if, for example, you have a requirement to modify (respecify) the error and record log files for each jabberd invocation, you can do something like this:
<log id='elogger'>
  <host/>
  <logtype/>
  <format>%d: [%t] (%h): %s</format>
  <file><jabberd:cmdline flag='e'>error.log</jabberd:cmdline></file>
  ...
</log>
and then override the value error.log with something else at invocation time:
yak:~/jabber-1.4.1$ ./jabberd/jabberd -e error_log.txt
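The override mechanism itself is simple enough to sketch: the value wrapped by <jabberd:cmdline/> is a default, displaced by the matching command-line flag when one is supplied. A Python illustration (not jabberd code):

```python
def resolve_cmdline(default_text, flag, overrides):
    """Value yielded by a <jabberd:cmdline flag='...'/> wrapper: the
    command-line override for that flag if one was given, otherwise
    the default text wrapped by the tag.  `overrides` maps flag
    letters to values, e.g. {'e': 'error_log.txt'} for
    `jabberd -e error_log.txt`."""
    return overrides.get(flag, default_text)

# <file><jabberd:cmdline flag='e'>error.log</jabberd:cmdline></file>
print(resolve_cmdline('error.log', 'e', {}))                      # error.log
print(resolve_cmdline('error.log', 'e', {'e': 'error_log.txt'}))  # error_log.txt
```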
Throughout the discussion of components in this chapter and how they're arranged to form a "complete" Jabber server, we've really considered only a monolithic server, running in a single process. However, there may be good reasons (performance, administration, and manageability) to run the Jabber server in different configurations, or constellations.
In this concluding section of this chapter, we take a look at some of the possible constellations and how they're constructed.
Multiple Servers on One Host
Although it's unlikely that this constellation would be of much use, it is possible to run more than one Jabber server on one host simply by creating multiple installations, maintaining each server's jabber.xml configuration file separately, and starting them up listening on different ports. Note, however, that some Jabber clients don't support connections to anything other than port 5222.
As we have seen from examining the instance configuration for the Client (to Server) Connections and the Server (to Server) Connections components, the standard Jabber ports for client and server connectivity are 5222 and 5269, respectively. To run a second Jabber server on the same host, just ensure that its Connections component instances are configured to listen on different ports.
"Real" Virtual Jabber Servers
While looking at Section 4.4.2 earlier in this chapter, we saw how to use multiple <host/> tags to allow connection to the Jabber server under multiple hostnames. Although this simple feature might be useful in some circumstances, a better distinction of Session Management functionality might be more appropriate.
Taking our a-domain.com and b-domain.com hostname examples again, we might want to offer different welcome messages to new users and limit the authentication possibilities for the b-domain.com host to zero knowledge only. We also may wish to disable the message filtering service for the a-domain.com host. Furthermore, we might want to offer--in the <browse/> list--a different set of services for each of the hosts.
Let's have a look how this can be done. Using the <jabberd:include/> tag to organize our configuration XML by component instance definitions, we might have a jabber.xml configuration file that looks like Example 4-25.
Virtual server jabber.xml configuration
<!-- Common components -->
<jabberd:include>./config/common/xdb.xml</jabberd:include>
<jabberd:include>./config/common/c2s.xml</jabberd:include>
<jabberd:include>./config/common/elogger.xml</jabberd:include>
<jabberd:include>./config/common/rlogger.xml</jabberd:include>
<jabberd:include>./config/common/dnsrv.xml</jabberd:include>
<jabberd:include>./config/common/s2s.xml</jabberd:include>

<!-- a-domain.com -->
<jabberd:include>./config/a-domain/sessions.xml</jabberd:include>
<jabberd:include>./config/a-domain/conference.xml</jabberd:include>

<!-- b-domain.com -->
<jabberd:include>./config/b-domain/sessions.xml</jabberd:include>
<jabberd:include>./config/b-domain/conference.xml</jabberd:include>
<jabberd:include>./config/b-domain/jud.xml</jabberd:include>

<!-- IO, PIDfile -->
<jabberd:include>./config/common/io.xml</jabberd:include>
<jabberd:include>./config/common/pidfile.xml</jabberd:include>
What can we see here? First, a-domain.com and b-domain.com Jabber users will share the common facilities such as data storage (remembering that data will be stored by hostname within the spool area), Client (to Server) Connections, Logging, and so on.
They also share the same io settings and pidfile definition; after all, there is still only one Jabber server that is hosting these two virtual servers, so we need only one pidfile.
But we also see that there are two sessions.xml files included--one for the a-domain.com host and another for the b-domain.com host. And with each of the sessions.xml files included, we have one or two other components--for Conferencing and JUD services.
Configuration for a-domain.com
The layout in the jabber.xml file indicates that there are separate definitions for each of the two hosts. Let's examine the contents of ./config/a-domain/sessions.xml:
<!-- No filter config necessary -->

<vCard>
  <FN>a-domain.com Jabber Services</FN>
  <DESC>Jabber 1.4.1 on a-domain.com</DESC>
  <URL>http://www.a-domain.com</URL>
</vCard>

<browse>
  <conference type="public" jid="conference.a-domain.com"
              name="a-domain Conferencing"/>
</browse>

<!-- a-domain.com not open for self-service new user accounts
<register notify="yes">
  <instructions/>
  <name/>
  <email/>
</register>
-->

<welcome>
  <subject>Welcome!</subject>
  <body>Welcome to the Jabber server at a-domain.com</body>
</welcome>

<admin>
  <write>email@example.com</write>
  <reply>
    <subject>Auto Reply</subject>
    <body>This is a special administrative address.</body>
  </reply>
</admin>

<load main="jsm">
  <jsm>./jsm/jsm.so</jsm>
  <mod_echo>./jsm/jsm.so</mod_echo>
  <mod_roster>./jsm/jsm.so</mod_roster>
  <mod_time>./jsm/jsm.so</mod_time>
  <mod_vcard>./jsm/jsm.so</mod_vcard>
  <mod_last>./jsm/jsm.so</mod_last>
  <mod_version>./jsm/jsm.so</mod_version>
  <mod_announce>./jsm/jsm.so</mod_announce>
  <mod_agents>./jsm/jsm.so</mod_agents>
  <mod_browse>./jsm/jsm.so</mod_browse>
  <mod_admin>./jsm/jsm.so</mod_admin>
  <!-- No filter service for a-domain.com
  <mod_filter>./jsm/jsm.so</mod_filter>
  -->
  <mod_offline>./jsm/jsm.so</mod_offline>
  <mod_presence>./jsm/jsm.so</mod_presence>
  <mod_auth_plain>./jsm/jsm.so</mod_auth_plain>
  <mod_auth_digest>./jsm/jsm.so</mod_auth_digest>
  <mod_auth_0k>./jsm/jsm.so</mod_auth_0k>
  <mod_log>./jsm/jsm.so</mod_log>
  <mod_register>./jsm/jsm.so</mod_register>
  <mod_xml>./jsm/jsm.so</mod_xml>
</load>
We can see that this configuration file contains the definition of a JSM component instance. The instance is identified with the name sessions.a-domain and the host a-domain.com has been registered as what the JSM listens for--its "external identification."
We can also see that:
- The literal texts in the descriptions and in the welcome message are specific to a-domain.com.
- The administration section in the configuration describes a local user that is at a-domain.com as the administrator.
- The new user registration facility has been disabled.
- The mod_filter service has been commented out from the list of loaded modules in the component connection definition.
There is one service listed in the browse section--the Conferencing service, with the JID conference.a-domain.com; this is the service that's defined in the file ./config/a-domain/conference.xml, which itself is specified in a <jabberd:include/> tag in the main jabber.xml alongside this sessions.xml file.
Taking a look at this Conferencing service definition for a-domain.com in the ./config/a-domain/conference.xml file, we see:
<service id="conf.a-domain">
  <host>conference.a-domain.com</host>
  ...
  <conference xmlns="jabber:config:conference">
    <public/>
    <vCard>
      <FN>a-domain Chatrooms</FN>
      <DESC>This service is for ...</DESC>
    </vCard>
    <history>10</history>
    <notice>
      <join> is here</join>
      <leave> has left</leave>
      <rename> is now known as </rename>
    </notice>
    <room jid="firstname.lastname@example.org">
      <name>The Bar</name>
    </room>
  </conference>
</service>
Similar to what we saw with the ./config/a-domain/sessions.xml content, here we see a-domain.com-specific definitions: crucially the service identification as conf.a-domain and the <host/> tag declaring the hostname that this service serves under.
Configuration for b-domain.com
Now that we've seen the a-domain-specific XML, let's have a look at the b-domain-specific XML:
<vCard>
  <FN>b-domain Jabber Server</FN>
  <DESC>Jabber 1.4.1 on b-domain.com</DESC>
  <URL>http://www.b-domain.com/</URL>
</vCard>

<browse>
  <conference type="public" jid="conference.b-domain"
              name="b-domain Conferencing"/>
  <service type="jud" jid="jud.b-domain" name="b-domain JUD">
    <ns>jabber:iq:search</ns>
    <ns>jabber:iq:register</ns>
  </service>
</browse>

<register notify="yes">
  <instructions>
    Choose a username and password to register with this server.
  </instructions>
  <name/>
</register>

<welcome>
  <subject>Welcome!</subject>
  <body>Welcome to the Jabber server at b-domain</body>
</welcome>

<admin>
  <read>info@b-domain</read>
  <write>service@b-domain</write>
  <reply>
    <subject>Auto Reply</subject>
    <body>This is a special administrative address.</body>
  </reply>
</admin>

<load main="jsm">
  <jsm>./jsm/jsm.so</jsm>
  <mod_echo>./jsm/jsm.so</mod_echo>
  <mod_roster>./jsm/jsm.so</mod_roster>
  <mod_time>./jsm/jsm.so</mod_time>
  <mod_vcard>./jsm/jsm.so</mod_vcard>
  <mod_last>./jsm/jsm.so</mod_last>
  <mod_version>./jsm/jsm.so</mod_version>
  <mod_announce>./jsm/jsm.so</mod_announce>
  <mod_agents>./jsm/jsm.so</mod_agents>
  <mod_browse>./jsm/jsm.so</mod_browse>
  <mod_admin>./jsm/jsm.so</mod_admin>
  <mod_filter>./jsm/jsm.so</mod_filter>
  <mod_offline>./jsm/jsm.so</mod_offline>
  <mod_presence>./jsm/jsm.so</mod_presence>
  <!-- zero-knowledge authentication only
  <mod_auth_plain>./jsm/jsm.so</mod_auth_plain>
  <mod_auth_digest>./jsm/jsm.so</mod_auth_digest>
  -->
  <mod_auth_0k>./jsm/jsm.so</mod_auth_0k>
  <mod_log>./jsm/jsm.so</mod_log>
  <mod_register>./jsm/jsm.so</mod_register>
  <mod_xml>./jsm/jsm.so</mod_xml>
</load>
We can see we've fulfilled our requirements of the virtual server for b-domain: registration is open but authentication is limited to zero knowledge, and the services offered in the <browse/> list are unique to b-domain. That said, it is possible for someone registered to b-domain to connect to and use, say, the Conferencing service listening on conference.a-domain.com; see Section 4.16.4 later in this chapter.
The Conferencing and JUD services associated with the b-domain.com hostname will be configured in a similar way to how the Conferencing service was configured in ./config/a-domain/conference.xml for a-domain--crucially again the service IDs will be unique and the <host/> tags will be specific to b-domain.com.
As long as each component instance is uniquely identified and you have used separate hostname definitions, "real" virtual Jabber servers all listening to the same Jabber standard client port of 5222 on a single host can be a reality.
Splitting Up Jabber Server Processes
As well as being able to lump multiple Jabber server identities in the form of virtual hosting onto a single Jabber server and its corresponding monolithic process, you may also go in the opposite direction and split up a single Jabber server into multiple processes. These processes interact through TCP socket connections and so it's possible for them to run on the same or different physical hosts.
How is this achieved? Well, revisiting the ideas from the start of this chapter, we consider that a Jabber server is a daemon (jabberd) and a set of components that provide the services. Taking one step away from the "classic" Jabber server model, which contains components such as the ones described in Section 4.1 at the start of this chapter, we can imagine a Jabber server where jabberd controls just one component, say the Conferencing component.
How much use is a Jabber server with a single Conferencing component? Not much. But when linked together with another Jabber server, we can see that this is a way to split off components and run them independently.
Taking the Conferencing component as an example candidate for ostracism, let's have a look at what we need to do.
Define the configuration for the satellite server
This is very straightforward. We've seen Conferencing configuration before, so we'll shorten it a bit here:
<service id='conf.yak'>
  <host>conference.yak</host>
  <load><conference>./conference-0.4.1/conference.so</conference></load>
  <conference xmlns="jabber:config:conference">
    ...
This is the entirety of the configuration file so far for the satellite server--there's only one component instance--identified as conf.yak. Notice that the only other tag pair is the filewide <jabber> ... </jabber>. Let's call it jabber_conf.xml.
Open a connection point in the main server
We've already seen a mechanism earlier in this chapter in Section 4.1.3 that allows external components to connect into the Jabber server backbone by exchanging XML streams in the jabber:component:accept namespace. This is the TCP socket connection method.
We can prepare a connection point to the main Jabber server by specifying a component connection like this:
<service id="conflinker">
  <accept>
    <ip>127.0.0.1</ip>
    <port>9001</port>
    <secret>confsecret</secret>
  </accept>
</service>
in the configuration for the main Jabber server.
There's no real difference between this XML and the XML shown in the <accept/> example earlier in this chapter. The clue lies in the service ID, which has been defined as conflinker. There's nothing special about the name; it simply gives the administrator a hint that there's some sort of link to a conference service from this point.
We're specifying acceptance of connections on IP address 127.0.0.1 (the same host as this main server), but it could just as easily be the IP address assigned to a network card, so that the connection could be made from a satellite server on a separate host.
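For the curious, authentication on such a connection is simple: the server sends a stream ID in its reply header, and the component answers with a <handshake/> element containing the SHA-1 hex digest of that stream ID concatenated with the shared secret (the protocol was later written up as XEP-0114). A Python sketch, with a made-up stream ID:

```python
import hashlib

def handshake_digest(stream_id, secret):
    """Hex SHA-1 of the stream ID concatenated with the shared secret,
    as sent in the <handshake/> element over a
    jabber:component:accept connection."""
    return hashlib.sha1((stream_id + secret).encode()).hexdigest()

# '3BF96D32' is a made-up stream ID for illustration; the real one is
# issued by the server in its stream header.
digest = handshake_digest('3BF96D32', 'confsecret')
print(len(digest))  # 40
```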
List the service definition in <browse/>
While we're editing the main server's XML, we should add an entry for our satellite conference service:
<browse>
  ...
  <conference type="public" jid="conference.yak" name="yak Conferencing"/>
  ...
</browse>
The JID defined here must match the host defined in the Conferencing component instance definition in the satellite server configuration.
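This matching requirement is easy to check mechanically. Here is a Python sketch using minimal stand-ins for the two configuration files (the real files carry much more, as shown above):

```python
import xml.etree.ElementTree as ET

# Minimal stand-ins for the main and satellite configurations.
main_xml = """
<jabber>
  <browse>
    <conference type="public" jid="conference.yak" name="yak Conferencing"/>
  </browse>
</jabber>
"""
satellite_xml = """
<jabber>
  <service id="conf.yak">
    <host>conference.yak</host>
  </service>
</jabber>
"""

# The jid browsed on the main server must equal the satellite's <host/>.
browse_jid = ET.fromstring(main_xml).find('.//conference').get('jid')
sat_host = ET.fromstring(satellite_xml).find('.//service/host').text
print(browse_jid == sat_host)  # True
```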
Add a connector mechanism to the satellite server
Now that we've opened up a connection point in the main server, we need to add some corresponding configuration to the satellite server's XML--the "plug" that will attach to the connection point on the main server:
<jabber>
  <service id="conflinker">
    <uplink/>
    <connect>
      <ip>127.0.0.1</ip>
      <port>9001</port>
      <secret>confsecret</secret>
    </connect>
  </service>
  <service id='conf.yak'>
    ...
    <conference xmlns="jabber:config:conference">
      ...
    </conference>
  </service>
</jabber>
This new service (the "plug") with an ID of conflinker (which matches the ID of the corresponding "socket" in the main server) contains two elements.
- The <connect/> tag, which corresponds to the <accept/> tag in the main server's configuration.
- The <uplink/> tag, which serves as a conduit for all types of packets--those handled by each of the three delivery trees log, xdb, and service.

While we're looking at the satellite server's configuration again, it's worth pointing out that even in a situation in which the satellite server process would be running on a separate host (we're running it here on the same host--hence the localhost IP address of 127.0.0.1), the value of the conference service's host filter is still conference.yak. In other words, the name of the host where the satellite server actually runs is irrelevant; the conference service is still seen by the main Jabber server's jabberd as "local," through the accept/connect binding, so the shared logical name yak is more appropriate.
Specify a different PID file location
If the satellite server is going to be running on the same host as the main server, and from the same directory (indeed, in this example, we've named the satellite server's configuration file jabber_conf.xml to distinguish it from the main server's jabber.xml file), make sure a different location for storing the PID file is specified:
<jabber>
  <service id="conflinker">
    ...
  </service>
  <service id='conf.yak'>
    ...
  </service>
  <pidfile>...</pidfile>
</jabber>
Starting the main server
Once everything is configured, start up the main server:
yak:~/jabber-1.4.1$ ./jabberd/jabberd -c jabber.xml
The <accept/> section should start listening on port 9001 for a connection:
yak:~/jabber-1.4.1$ netstat -an | grep 9001
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN
Starting the satellite server
It's time to start up the satellite server, from the same directory in this example:
yak:~/jabber-1.4.1$ ./jabberd/jabberd -c jabber_conf.xml
The satellite server should make a connection to the socket listening on 127.0.0.1:9001.
At this stage, you should have Jabber server services split between a main process and a separate process that runs a Conferencing component.
At the risk of stating the obvious, it is worth pointing out that this example shows that simply starting jabberd does not mean that any process will bind to and start listening on port 5222. It is the c2s component that makes this happen. So starting a second jabberd on the same host did not cause any socket listening problems because this second jabberd doesn't have a c2s component (because there's no JSM for clients to want to connect to) and so doesn't try to bind to port 5222.
Using Services on Other Jabber Servers
This section describes a technique that we've already seen used implicitly in Section 4.16.3: the use of services on other Jabber servers. In reality, the example of running a Conferencing module in a satellite Jabber server showed the technique in the context of local administrative control; we control the main and satellite servers, and the module in the satellite server may rely on services in the main server for support.
Consider the <browse/> section in the jabber.xml configuration file that comes with Jabber server 1.4.1:
<browse>
  ...
  <service type="jud" jid="users.jabber.org" name="Jabber User Directory">
    <ns>jabber:iq:search</ns>
    <ns>jabber:iq:register</ns>
  </service>
  ...
</browse>
What's this? A JID of users.jabber.org? How many Jabber server installations will be running with the jabber.org domain name? Just one. This means that the <browse/> section is pointing to a JUD component running at jabber.org as users.jabber.org. If the Jabber server is running the Server (to Server) Connections and Hostname Resolution components, clients connecting to our server can transparently jump across the wire and avail themselves of the JUD services at users.jabber.org.
The entry doesn't have to be in the <browse/> section. This is more for convenience, so that the clients can build a dynamic list of services from which the user may choose. The client may of course offer a facility for the user to directly enter the name (hostname, address) of the service she requires.
How does this procedure compare to the "satellite server" procedure? In this case, the packets that originate from a Jabber client connected to our Jabber server make their way to the JUD service on users.jabber.org by means of the Server (to Server) service. That is, they travel through a connection described by the jabber:server namespace. On the other hand, packets on our server destined for a satellite conference service travel through a connection described by the jabber:component:accept namespace.