Compilers - 5B: Parsers

Greetings,

welcome back to the sequel of the parsers article chapter. This part is dedicated
to the ExpressionParser, the largest parser class for our little language.
This class parses a complete expression and, just like the other parser classes,
calls the Generator on the fly. When the parse has completed successfully, the
generated code is returned; otherwise the parse is aborted and an exception is
thrown that tells the reason for the abort.

The ExpressionParser

The obligatory preamble of the ExpressionParser class is boring again:

    public class ExpressionParser extends AbstractParser {

        public ExpressionParser(Tokenizer tz, User user) { super(tz, user); }

Just like in the Parser and DefinitionParser classes, a Tokenizer and a User object
are passed to the constructor of this class. Here's the main entry point for
this object:

    public Code parse(Generator gen) throws InterpreterException {

        binaryExpression(gen, 0);

        return gen.getCode();
    }

You've seen this first method being used already: the DefinitionParser calls
it when it needs an entire expression to be parsed for the user defined function
body. The method calls another private method that does the job; it'll come a
bit later (see below). The method returns the compiled code that has been
generated on the fly while parsing the expression.
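
Purely as an illustration, this is how a caller could drive this entry point; the
Tokenizer and User objects are assumed to be set up elsewhere (normally the Parser
creates and owns them, see the concluding remarks at the end of this part), and the
no-argument Generator constructor is the one we'll meet again in the quoteExpression
method:

    // Sketch only: compile a single expression with an ExpressionParser;
    // 'tz' and 'user' are assumed to exist already.
    Code compileOneExpression(Tokenizer tz, User user) throws InterpreterException {

        ExpressionParser ep= new ExpressionParser(tz, user);

        return ep.parse(new Generator());   // the compiled, postfix-like code
    }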

Binary expressions

Here are the grammar rules again for a binary expression:

    expression0: expression1 ( operator0 expression1 )*
    expression1: expression2 ( operator1 expression2 )*
    expression2: expression3 ( operator2 expression3 )*
    expression3: expression4 ( operator3 expression4 )*
    expression4: expression5 ( operator4 expression5 )*
    expression5: unary       ( operator5 unary       )*
    operator0: ':'
    operator1: '==' | '!='
    operator2: '<=' | '<' | '>' | '>='
    operator3: '+' | '-'
    operator4: '*' | '/'
    operator5: '^'

Note the numbers 0, 1, 2, 3, 4 and 5; we're going to use these numbers as index
numbers for the binary expression. In general rule 'i' looks like this:

    expression(i): expression(i+1) ( operator(i) expression(i+1) )*

... except when 'i' == 5 where the 'unary' rule is used instead. It translates
to Java code like this:

    private void binaryExpression(Generator gen, int i)
                            throws InterpreterException {

        Token token;

        if (i == ParserTable.binops.size())
            unaryExpression(gen);
        else {
            for (binaryExpression(gen, i+1);
                ParserTable.binops.get(i).
                    contains((token= tz.getToken()).getStr()); ) {
                tz.skip();
                binaryExpression(gen, i+1);
                gen.makeFunctionInstruction(token, 2);
            }
        }
    }

The index value 'i' represents the precedence level and selects the operators of
that precedence that participate in this binary expression. At the highest precedence
level a binary expression is just a unary expression; otherwise, at precedence
level 'i', binary expressions that take higher precedence level operands
are parsed, separated by precedence level 'i' operators.

The 'makeFunctionInstruction' method in the Generator class generates a binary
operator instruction and adds it to the compiled code list. Also see the
explanation in the 'Grammar' part of this article. From a bird's-eye view this
recursive (!) method parses binary expressions as described in the indexed
grammar rule above. If you understand this method, the rest of the ExpressionParser's
methods will be a piece of cake. This method takes care of the binary operator
precedence rules, i.e. it recursively parses expressions that take higher and
higher precedence operators until unary expressions are the only thing that's
left. Unary expressions are a bit of a mess syntactically speaking because there
can be so many forms of them. We'll split things up a bit.
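
The binaryExpression method above (and the unaryExpression method in the next
section) consults the operator tables in the ParserTable. Those tables were filled
in the earlier bookkeeping part of this article, where they are read from a
definition file, so take the following only as a rough sketch of the kind of data
they hold; the exact grouping and numbering of the precedence levels in the real
table may differ:

    import java.util.*;

    // Illustration only: the shape of the operator tables the parser consults.
    // The real ParserTable is filled elsewhere; the contents here follow the
    // grammar rules listed above and are not authoritative.
    class OperatorTables {

        // one Set of operator strings per precedence level, index 0 first
        static final List<Set<String>> binops= new ArrayList<Set<String>>();

        // the unary operators form one flat set
        static final Set<String> unaops= new HashSet<String>(
                Arrays.asList("+", "-", "!", "++", "--"));

        static {
            binops.add(new HashSet<String>(Arrays.asList(":")));                  // level 0
            binops.add(new HashSet<String>(Arrays.asList("==", "!=")));           // level 1
            binops.add(new HashSet<String>(Arrays.asList("<=", "<", ">", ">="))); // level 2
            binops.add(new HashSet<String>(Arrays.asList("+", "-")));             // level 3
            binops.add(new HashSet<String>(Arrays.asList("*", "/")));             // level 4
            binops.add(new HashSet<String>(Arrays.asList("^")));                  // level 5
        }
    }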

Unary expressions

Here are the grammar rules for unary expressions again:

    unary: ( unaryop unary ) | atomic
    unaryop: '+' | '-' | '!' | '++' | '--'

Basically a unary expression is an atomic expression preceded by zero or more
unary operators. The operators are applied (or evaluated) from right to left
although they are parsed from left to right. Here's what the unaryExpression
method looks like:

    private void unaryExpression(Generator gen) throws InterpreterException {

        Token token= tz.getToken();

        if (ParserTable.unaops.contains(token.getStr())) {
            tz.skip();
            unaryExpression(gen);
            gen.makeFunctionInstruction(token.mark(), 1);
        }
        else
            atomicExpression(gen);
    }

Again this is a highly recursive method: as long as a unary operator is present
in the token stream this method calls itself recursively and generates compiled
code for the unary operator afterwards. If no unary operator is present in the
token stream (anymore), the work is delegated to the atomicExpression method.

Here's an example: suppose the input stream is '-!x'; first the '-' operator
is parsed and the method calls itself again. The input stream now contains '!x'.
The '!' operator is parsed and the method calls itself again. The input stream
now is 'x', which is handled by the atomicExpression method. In the second call
of the unaryExpression method code is generated for the '!' operator; the method
returns and in the first call of this method code is generated for the '-'
operator. The code can symbolically be represented as 'x ! -'.

Once we can play a bit with the parsers we'll see that the compiled code resembles
a postfix representation of the infix expressions being compiled.
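
To get a feeling for why such a postfix form is pleasant to work with, here is a
small stand-alone sketch that is not part of the article's code base: a stack-based
evaluator for a hand-written postfix sequence. The real evaluation is done later by
the Instructions and the Interpreter.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Illustration only: evaluate a postfix sequence such as "1 2 3 * +",
    // which is the postfix form of the infix expression 1 + 2 * 3.
    public class PostfixDemo {

        static double eval(String... tokens) {
            Deque<Double> stack= new ArrayDeque<Double>();
            for (String t : tokens) {
                if (t.length() == 1 && "+-*/".contains(t)) {
                    // a binary operator pops two operands and pushes the result
                    double right= stack.pop(), left= stack.pop();
                    switch (t.charAt(0)) {
                        case '+': stack.push(left+right); break;
                        case '-': stack.push(left-right); break;
                        case '*': stack.push(left*right); break;
                        case '/': stack.push(left/right); break;
                    }
                }
                else
                    stack.push(Double.valueOf(t));   // a constant is just pushed
            }
            return stack.pop();
        }

        public static void main(String[] args) {
            System.out.println(eval("1", "2", "3", "*", "+"));   // prints 7.0
        }
    }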

When we've descended this deep in our recursive descent parse there are no more
binary or unary operators to parse in this part of the expression. All that's
left are 'atomic' expressions.

Atomic expressions

Atomic expressions are a mess syntactically speaking. Basically an atomic expression
starts with a constant, a left parenthesis, a left curly bracket or a name.

The first options are relatively easy; it's the 'name' that is the cumbersome part.
All the Tokenizer knows is that it has scanned a name; but a name can be
anything:

1) a built-in function;
2) a user defined function or listfunc;
3) a quoted object (see the first part of this chapter);
4) a simple variable, or
5) an assignment to a variable.

Let's dig into this mess; here's the atomicExpression method:

    private void atomicExpression(Generator gen) throws InterpreterException {

        Token token= tz.getToken();

        switch (type(token)) {

            case TokenTable.T_NUMB: constantExpression(token, gen); break;
            case TokenTable.T_NAME: nameExpression(token, gen); break;

            case ParserTable.T_USER: userExpression(token, gen); break;
            case ParserTable.T_FUNC: functionExpression(token, gen); break;
            case ParserTable.T_QUOT: quoteExpression(token, gen); break;

            default:
                if (expect("("))
                    nestedExpression(gen);
                else if (expect("{"))
                    listExpression(gen);
                else
                    throw new ParserException(tz,
                        "expression expected");
        }
    }

This method tries to predict what to do next given the type of the current token.
It delegates all the actual work to yet other methods.
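
The type() method used in the switch above is not shown in this part of the
article; it refines the Tokenizer's notion of a plain name by consulting the tables
we already have. Roughly it could look like the sketch below; the Token.getType()
accessor and the ParserTable.quots set are assumptions here (presumably the real
method lives in the AbstractParser) and the real implementation may differ:

    // Sketch only: a plausible shape for the type() helper.
    protected int type(Token token) {

        int type= token.getType();                    // the Tokenizer's own verdict

        if (type == TokenTable.T_NAME) {
            String name= token.getStr();

            if (user.get(name) != null)               // a user defined function?
                return ParserTable.T_USER;
            if (ParserTable.quots.contains(name))     // a quoted object?
                return ParserTable.T_QUOT;
            if (ParserTable.arity.containsKey(name))  // a built-in function?
                return ParserTable.T_FUNC;
        }
        return type;                                  // a number, a plain name, etc.
    }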

Constant expressions

Constant expressions are easy: simply generate the Instruction that handles the
current constant. In our little language the only constants are doubles. Here's
the method that handles a constant:

    private void constantExpression(Token token, Generator gen) {

        gen.makeConstantInstruction(token);
        tz.skip();
    }

This method asks the code Generator to produce a ConstantInstruction (whatever
that may be) and it considers the current token as processed, so it skips it.

Parenthesized expressions

Parenthesized expressions are relatively easy to parse too: a left parenthesis
has already been processed (see the atomicExpression method), so just a binary
expression followed by a right parenthesis needs to be parsed:

    private void nestedExpression(Generator gen) throws InterpreterException {

        binaryExpression(gen, 0);
        demand(")");
    }

Observe that this deeply nested method calls one of the outermost methods again:
an entire binary expression with all its precedence levels is parsed again using
just one recursive method call. We've reached the heart of recursive descent
parsing and we will reach it again a couple of times.
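
The expect, demand and skipDemand helpers used by these parser methods aren't
shown in this part either; presumably they live in the AbstractParser. Their
behaviour can be inferred from the way they are called: expect consumes the given
token if, and only if, it is the current one; demand insists on it; skipDemand
first skips the current token and then demands the given one. A sketch, with the
details assumed:

    // Sketch only: the little helpers the parser methods lean on.
    protected boolean expect(String s) throws InterpreterException {

        if (!tz.getToken().getStr().equals(s))
            return false;                 // no match: leave the token alone
        tz.skip();                        // match: consume it
        return true;
    }

    protected void demand(String s) throws InterpreterException {

        if (!expect(s))
            throw new ParserException(tz, "'"+s+"' expected");
    }

    protected void skipDemand(String s) throws InterpreterException {

        tz.skip();                        // consume the current token (e.g. a name)
        demand(s);                        // and insist on, say, a left parenthesis
    }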

For an expression as simple as '(1+1)' the following methods are recursively
called:

- binaryExpression(0)
- binaryExpression(1)
- binaryExpression(2)
- binaryExpression(3)
- binaryExpression(4)
- unaryExpression
- atomicExpression
- nestedExpression

and then only that left parenthesis has been recognized and parsed. For that
first '1' we have to do the whole trip again:

- binaryExpression(0)
- binaryExpression(1)
- binaryExpression(2)
- binaryExpression(3)
- binaryExpression(4)
- unaryExpression
- atomicExpression
- constantExpression

Then recursion unwinds to the level of the binaryExpression(2) that recognizes
the '+' sign in the token stream and the entire circus starts all over again.

There's no need to feel sad about it or to be afraid of it: recursion is fast
nowadays and the depth of the stack isn't a problem anymore either. Let's go for
the nasty final parts:

Name expressions

When a name is parsed that is not some sort of a function or a quoted object
it can be either just a name or an assignment. Here's how it's parsed:

    private void nameExpression(Token token, Generator gen)
                            throws InterpreterException {

        tz.skip();
        String assign= tz.getToken().getStr();
        if (ParserTable.asgns.contains(assign)) {
            tz.skip();
            gen.preAssignmentInstruction(token, assign);
            binaryExpression(gen, 0);
            gen.makeAssignmentInstruction(token, assign);
        }
        else {
            gen.makeNameInstruction(token);
            if (ParserTable.pstops.contains(assign)) {
                tz.skip();
                gen.makeConstantInstruction(Token.ONE);
                gen.makeAssignmentInstruction(token, assign.charAt(0)+"=");
            }
        }
    }

This method skips the name (it still has the Token as its first parameter) and
checks whether or not the name is followed by an assignment operator. If so,
it skips the token and invokes that topmost binaryExpression method again for
the right hand side value of the assignment. Next it lets the code Generator
generate an AssignmentInstruction for it.

If no assignment operator was present it makes the code Generator generate a
simple NameInstruction for the name that has just been parsed; if a postfix
++ or -- token follows the name, the appropriate constant and assignment
instructions are generated as well.

As you might see later, our postfix decrement and increment operators behave
identically to C/C++/Java's prefix increment and decrement operators. Confusing,
admittedly, but it takes less parsing this way for our simple little programming
language; and we're free to define and implement whatever we want after all ;-)
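
For comparison, this is plain Java showing the prefix/postfix difference our little
language deliberately ignores; in our language 'x++' yields the already incremented
value, just like Java's '++x' does:

    public class IncrementDemo {

        public static void main(String[] args) {

            int x= 5;
            System.out.println(x++);   // prints 5: Java's postfix yields the old value
            System.out.println(x);     // prints 6

            int y= 5;
            System.out.println(++y);   // prints 6: Java's prefix yields the new value
        }
    }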

User function expressions

User function calls are quite simple to parse; here's how it's done:

    private void userExpression(Token token, Generator gen)
                            throws InterpreterException {

        parseArguments(gen, getUserArity(token));

        gen.makeInstruction(user.get(token.getStr()));
    }

This method parses the arguments of the user function call given the arity of
the user function. Then it makes the code Generator create an instruction which
it retrieves from the User object (see the first part of this chapter).

The 'getUserArity' method looks like this:

    private int getUserArity(Token token) throws InterpreterException {

        skipDemand("(");

        return user.get(token.getStr()).getArity();
    }

First it demands a left parenthesis after having skipped the name of the user
defined function. Then it consults that User object again to get the arity of
the user defined function.
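
The ExpressionParser only relies on a tiny part of the User object: get(name) must
return something that knows its own arity and that the Generator can turn into an
Instruction. The real User class was shown in the first part of this chapter; the
sketch below, with assumed names, merely records that contract:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch only: the contract the ExpressionParser relies on, inferred from
    // user.get(...).getArity() and gen.makeInstruction(user.get(...)); the
    // real User class may look quite different.
    class UserSketch {

        // a user defined function as far as the parser is concerned: it has
        // an arity and presumably it is (or wraps) an Instruction
        static class UserFunction {
            private final int arity;
            UserFunction(int arity) { this.arity= arity; }
            int getArity() { return arity; }
        }

        private final Map<String, UserFunction> functions=
                                        new HashMap<String, UserFunction>();

        UserFunction get(String name) { return functions.get(name); }
        void put(String name, UserFunction f) { functions.put(name, f); }
    }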

Parsing the actual arguments of the user defined function is done like this:

    private int parseArguments(Generator gen, int n) throws InterpreterException {

        for (int i= 0; i < n; i++) {
            binaryExpression(gen, 0);
            if (i < n-1) demand(",");
        }
        demand(")");

        return n;
    }

The 'n' arguments are parsed; every argument can be an entire binary expression
again. Every argument expression must be followed by a comma, except for the last
one, which must be followed by a right parenthesis.

Built-in function expressions

Parsing built-in functions, whether they are listfuncs or ordinary functions, is
done in a similar way as we parsed the user defined functions. Have a look:

    private void functionExpression(Token token, Generator gen)
                            throws InterpreterException {

        gen.makeFunctionInstruction(token,
                        parseArguments(gen, getArity(token)));
    }

Again the arguments are parsed (see above) and a FunctionInstruction is created
by the code Generator. The only difference is that the parser already knows how
many arguments a built-in function needs, i.e. it retrieves the arity from
the ParserTable where the arities for built-in tokens are predefined:

    private int getArity(Token token) throws InterpreterException {

        skipDemand("(");

        return ParserTable.arity.get(token.getStr());
    }

Compare this method with the getUserArity method (see above). They're similar;
the only difference is the way they find the arity for the function.

Quoted object expressions

The mysterious quoted objects (think of the 'if' object) look similar to
ordinary function calls but they aren't: the code for their arguments is not
generated by the normal code Generator; instead another Generator is used for
the purpose and the compiled code for the arguments is passed to the compiled
quoted object. Here's how it's done:

    private void quoteExpression(Token token, Generator gen)
                            throws InterpreterException {

        List<Code> args= new ArrayList<Code>();

        for (int i= 0, n= getArity(token); i < n; i++) {
            Generator arg= new Generator();
            binaryExpression(arg, 0);
            if (i < n-1) demand(",");
            args.add(arg.getCode());
        }
        demand(")");

        gen.makeQuoteInstruction(token, args);
    }

For every single argument a new code Generator is used; all the code is collected
in a list of compiled code. The syntax checks are similar to the syntax checks
for the user defined or built-in functions. Finally a QuoteInstruction is created
by the 'normal' main code Generator; this way a quoted object such as 'if' can
decide for itself whether, and when, to run the compiled code of each of its
arguments. The use of this mysterious behaviour has been partly explained in the
previous part of this chapter and we'll get back to it when we discuss the
Instructions.

List expressions

There's just one more part to explain. Our little language can handle lists.
A list is a bunch of expressions between curly brackets. Note that a list is
itself an expression, so a list can have lists as its elements, which in turn
can have lists as their elements, etc.

This is how a list is parsed:

    private void listExpression(Generator gen) throws InterpreterException {

        int arity;

        for (arity= 0;; ) {
            if (expect("}"))
                if (arity == 0) break;
                else throw new ParserException(tz,
                            "list expression error");
            binaryExpression(gen, 0);
            arity++;
            if (expect(",")) continue;
            demand("}");
            break;
        }

        gen.makeListInstruction(arity);
    }

Basically this method keeps calling the outer binaryExpression method to
collect all the elements of the list. It takes care that the syntax is correct
by calling the expect and demand methods at the right places (check it) and
finally the code Generator is asked to create a ListInstruction given the number
of elements in the list that has just been parsed. Note that the list can contain
other lists as its elements simply because of all the recursion that is used: a
nested list element generates the code for its own elements followed by its own
ListInstruction before the outer list is finished.

Code

We've seen this mysterious Code object a couple of times. A Code object is
managed by the code Generator. A Code object is just this:

    public class Code extends ArrayList<Instruction> {

        private static final long serialVersionUID = 3728486712655477743L;

        public static Code read(InputStream is) throws InterpreterException {

            try {
                return (Code)new ObjectInputStream(is).readObject();
            }
            catch (Exception e) {
                throw new InterpreterException("can't read code", e);
            }
        }

        public static Code read(String name) throws InterpreterException {

            FileInputStream fis= null;

            try {
                return read(fis= new FileInputStream(name));
            }
            catch (IOException ioe) {
                throw new InterpreterException("can't read code from: "+name, ioe);
            }
            finally {
                try { fis.close(); } catch (Exception e) { }
            }
        }

        public void write(OutputStream os) throws InterpreterException {

            try {
                ObjectOutputStream oos= new ObjectOutputStream(os);
                oos.writeObject(this);
                oos.flush();    // push buffered data to the underlying stream
            }
            catch (Exception e) {
                throw new InterpreterException("can't write code", e);
            }
        }

        public void write(String name) throws InterpreterException {

            FileOutputStream fos= null;

            try {
                write(fos= new FileOutputStream(name));
            }
            catch (IOException ioe) {
                throw new InterpreterException("can't write code to: "+name, ioe);
            }
            finally {
                try { fos.close(); } catch (Exception e) { }
            }
        }
    }

As you can see a Code object is a List that contains Instructions. The class
contains a couple of convenience methods, i.e. it can serialize itself to an
OutputStream or a file, and two static methods can read a serialized Code object
back from an InputStream or a file. Apart from these methods the Code object just
is a List of Instructions. We'll talk about Instructions in a later part of this
article.

Note that whenever reading or writing fails, the convenience methods nicely wrap
the thrown exception in another InterpreterException and let that one bubble up
to the callers of these methods. If the methods are passed a String it is supposed
to be the name of a file; otherwise the streams are used for reading or writing
but they're not closed. It's up to the caller to take care of that.

That's normal behaviour: if an object 'owns' something, it's responsible for it;
otherwise some other object (the owner) should take responsibility. When a file
name was passed in, the Code object is the owner of the stream it creates and it
should close it when done with it.

Of course the read methods are static methods (there is no Code object yet) but
the write methods are not static: a Code object can serialize itself. Last, note
that funny number: it is needed for the Serialization framework in case of
different versions of the Code class.
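
For example, a compiled expression can be saved and reloaded like this (the file
name is made up for the example):

    // save a compiled Code object to a file and read it back later
    void saveAndReload(Code compiled) throws InterpreterException {

        compiled.write("expression.code");              // a Code writes itself
        Code restored= Code.read("expression.code");    // static read: no Code object yet

        System.out.println(restored.size()+" instructions reloaded");
    }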

Concluding remarks

As you might have noticed, Parsers can be reused, i.e. they can parse several
token streams given a Reader; they reuse the same Tokenizer over and over again.
We need this behaviour because a parser stores the information w.r.t. user
defined functions and we want to use our parsers in an interactive environment
too, where the user enters pieces of text line by line. Parsers construct
the Tokenizer they use themselves; only classes (objects thereof) in the same
package as the one where the Tokenizer class lives can instantiate a Tokenizer.

There is a nice convenience method available that runs in one-shot mode:
given the name of a file, it opens a stream and parses the entire text. We
also need that behaviour when we want to use the parser as a regular compiler
that compiles files and possibly saves the generated code to a file again.
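
That convenience method belongs to the Parser class from the previous part, so it
isn't reproduced here; purely as an illustration, such a one-shot driver could look
roughly like the sketch below, where the parse(Reader) entry point and the method
name are assumptions:

    // Sketch only: compile a source file in one shot and save the generated code.
    static void compileFile(Parser parser, String sourceName, String codeName)
                                            throws InterpreterException {
        java.io.FileReader reader= null;

        try {
            Code code= parser.parse(reader= new java.io.FileReader(sourceName));
            code.write(codeName);                       // serialize the compiled Code
        }
        catch (java.io.IOException ioe) {
            throw new InterpreterException("can't read source: "+sourceName, ioe);
        }
        finally {
            try { reader.close(); } catch (Exception e) { }
        }
    }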

That was a lot of code in this part of the Compilers article, but it was worth
it. We have implemented the entire parser for our compiler. We still can't do
anything useful with this little language because the Generator still needs to be
implemented and finally there's the Interpreter; the thing that shows us whether
or not we've done things correctly in the previous stages of the construction of
the compiler system.

I hope you're still with me here after that entire avalanche of code for the
parsers. If you still are: congratulations; you have seen how parsers can be
constructed from formal grammar rules on a one-to-one basis with a bit of
detail sprinkled in. We already made our tokenizer so we can parse source text
and check for syntactical correctness now. The next section of this article
will include the source code we have so far and a bit more. We'll see how those
expressions are compiled to something equivalent to postfix form expressions and
we're going to start working on the instructions that actually perform what we
want them to perform, i.e. evaluate the expressions that make up our program.

See you next week and

kind regards,

Jos