StringTokenizer skipping delimiters



  1. #1
    Kristol Guest

    StringTokenizer skipping delimiters


    I've created a bean to parse through a pipe-delimited file to load into a
    table. Whenever it encounters an empty (null) field between pipes, it skips
    straight to the next value. How do I force the tokenizer to recognize that
    null field as an actual field? Below are example lines from the file and the
    program.

    For the first line, the count should be 14, but countTokens() returns 11.

    Thanks for your help!

    30103|DENNIS DOLAN |BELL ATLANTIC|4530 BISHOP LN SUITE 108|LOUISVILLE|KY|40218|US||(502) 421 - 3963|D1|CVG|||
    30214|PAUL DAY |BELL ATLANTIC|845 LANE ALLEN RD SUITE 8|LEXINGTON|KY|40504|US||(606) 244 - 6343|D1|CVG| | |

    StringTokenizer token = new StringTokenizer(pRecord, "|", false);
    int holdCounter = 0;
    System.out.println("###count is " + token.countTokens());

    while (token.hasMoreTokens()) {
        String s = token.nextToken();
        if (s == null || s.length() == 0) {
            s = " ";  // substitute a blank for an empty field
        }
        System.out.println("###WHAT IS A TOKEN: CHECK TOKEN " + s);
        holdCounter = holdCounter + 1;
    }
    return holdCounter;


  2. #2
    Paul Clapham Guest

    Re: StringTokenizer skipping delimiters

    Pass true as the third argument to the constructor, which asks the tokenizer
    to return the delimiters themselves as tokens.
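
    A minimal sketch of that suggestion, reusing the pRecord variable from the
    question: the third argument (returnDelims) set to true is what makes the
    tokenizer hand back each "|" as a token of its own.

    StringTokenizer token = new StringTokenizer(pRecord, "|", true);
    while (token.hasMoreTokens()) {
        String s = token.nextToken();
        // s is either a field value or the single-character delimiter "|"
        System.out.println("###token: " + s);
    }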

    PC2

    "Kristol" <Kristoltaylor@hotmail.com> wrote in message
    news:3c90a096$1@10.1.10.29...
    >
    > I've created a bean to parse through a pipe delimited file to load into a
    > table. Whenever it encounters a null between pipes it skips to the next value.
    > How do I force the tokenizer to recognize that null field as an actual field?
    > Below is an example line from the file and the program.
    >
    > In the first line, count should return 14, it returns 11.
    >
    > Thanks for your help!
    >
    > 30103|DENNIS DOLAN |BELL ATLANTIC|4530 BISHOP LN SUITE 108|LOUISVILLE|KY|40218|US||(502) 421 - 3963|D1|CVG|||
    > 30214|PAUL DAY |BELL ATLANTIC|845 LANE ALLEN RD SUITE 8|LEXINGTON|KY|40504|US||(606) 244 - 6343|D1|CVG| | |
    >
    > StringTokenizer token = new StringTokenizer(pRecord,"|", false);
    > int holdCounter = 0;
    > System.out.println("###count is " +token.countTokens() );
    >
    > while (token.hasMoreTokens() ){
    > String s = token.nextToken();
    > if (s!=null && s.length() !=0) { } else {s=" ";}
    > System.out.println("###WHAT IS A TOKEN: CHECK TOKEN " +s);
    > holdCounter = holdCounter + 1;
    > }
    > return holdCounter;
    >




  3. #3
    Kristol Guest

    Re: StringTokenizer skipping delimiters


    Thanks for the help. I should have posted this originally: I've already tried
    that. It still skips the null fields. You get the delimiters as tokens plus the
    non-empty values, but nothing for the empty fields. So using the first line of
    the file that I'd included, the count is 25 when it should be 28 in this
    instance. This count is just the first step: I have to make sure the load file
    contains the correct number of columns before I can load it into the table. If
    the tokenizer skips the null fields, then my bigger problem will be loading
    data into the wrong columns.
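
    For the column check described here, a simpler alternative (assuming JDK 1.4 or
    later, and that the values never contain embedded pipes) is String.split with a
    negative limit, which keeps the empty fields; a minimal sketch:

    // A negative limit keeps trailing empty strings. Note that a record ending
    // with a terminating "|" yields one extra empty element after that final pipe.
    String[] parts = pRecord.split("\\|", -1);
    System.out.println("###columns (including empties): " + parts.length);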

  4. #4
    Paul Clapham Guest

    Re: StringTokenizer skipping delimiters

    Yes, that's right, countTokens() is not going to be "right". You have to
    actually read the tokens -- which will now include the delimiters. Your code
    can then check for pairs of adjacent delimiters and behave as if there were
    null tokens between them.
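
    A minimal sketch of that approach (the class and method names are illustrative,
    not from the thread): treat two adjacent "|" tokens, or a leading "|", as
    marking an empty field.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.StringTokenizer;

    public class PipeRecordParser {

        // Returns the fields of one pipe-delimited record, including empty ones.
        // The trailing "|" is treated as a record terminator; if it were a
        // separator instead, you would add one more empty field after the loop.
        public static List<String> parseFields(String pRecord) {
            StringTokenizer token = new StringTokenizer(pRecord, "|", true);
            List<String> fields = new ArrayList<String>();
            boolean lastWasDelimiter = true;  // a leading "|" means an empty first field

            while (token.hasMoreTokens()) {
                String s = token.nextToken();
                if (s.equals("|")) {
                    if (lastWasDelimiter) {
                        fields.add("");       // two "|" in a row: empty field between them
                    }
                    lastWasDelimiter = true;
                } else {
                    fields.add(s);
                    lastWasDelimiter = false;
                }
            }
            return fields;
        }
    }

    On the first sample record above this returns 14 fields, which is the column
    count Kristol expects, and fields.size() can replace the holdCounter check.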

    PC2

    "Kristol" <kristoltaylor@hotmail.com> wrote in message
    news:3c90ca98$1@10.1.10.29...
    >
    > Thanks for the help. I should have posted this originally. I've already tried
    > that. It still skips the null fields. You'll get a count of the delimiters
    > that have values in them plus the count of the values. No nulls. So using
    > the first line of the file that I'd included, count is 25, the count should
    > be 28 in this instance. This count is just the first step. I have to make
    > sure the load file contains the correct number of columns before I can then
    > load it into the table. If tokenizers skips nulls then my bigger problem
    > will be loading data in the wrong columns.



