Similarity Search for Content Matching | TextWise LLC

The TextWise SemanticHacker API provides a match service call that analyzes the text or Web page provided in the call and returns a Semantic Signature and a match. Matching documents via their Semantic Signatures is similar, in principle, to matching documents via the terms contained in them. Instead of using the terms contained in the documents and their frequencies, our matching uses the dimensions in the documents’ Semantic Signatures and their associated weights. Matching determines the distance between the documents’ Semantic Signatures.

Much of the math involved in the comparison is performed in advance during the process of generating the semantic dictionary and the Semantic Signatures. The steps to matching are:

  • Finding the semantic dimensions that are shared between two documents.
  • Calculating a weight (a similarity factor) for each shared semantic dimension.
  • Calculating a matching score. The higher the score, the more similar the documents are in the semantic space.

There are two ways to use the matching service:

  1. Custom content matching - you can maintain an index of your own content on our cloud and request highly relevant matches for your Web or enterprise content against that index. Contact us for details.
  2. Self-service content matching - developers can match their Web or other documents against our constantly updated indexes at no charge (subject to daily query limits). We currently have over 9.5 million items in our index - a complete list can be found in the Documentation.

When searching in an index for the documents that best match an incoming document, the following operations are performed:

  • The document is filtered to remove HTML and boilerplate if the incoming document is a Web page.
  • A Semantic Signature is generated for the incoming document.
  • The Semantic Signature is converted to a weighted term query: each dimension ID is used as a query term, and the dimension’s weight in the signature is used as a weighting factor in the query.
  • At least one semantic dimension must be shared between two documents in order for them to have a match score greater than zero.

To achieve adequate performance, the TextWise matching system limits the number of semantic dimensions used for each document to the top 30 and uses a nonstandard, weight-ordered index to perform the search.
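
This dimension-and-weight matching can be pictured with a short sketch. This is not TextWise code: the top-30 cutoff follows the description above, but the dot-product-style score and the dimension IDs are only illustrative assumptions, since the article does not give the exact scoring formula.

# Illustrative sketch only -- a Semantic Signature is modeled as a sparse
# mapping from dimension ID to weight, truncated to its top 30 dimensions.

def top_dimensions(signature, n=30):
    # Keep only the n highest-weighted dimensions.
    return dict(sorted(signature.items(), key=lambda kv: kv[1], reverse=True)[:n])

def match_score(sig_a, sig_b):
    # Score two signatures on their shared dimensions (dot-product style);
    # with no shared dimension the score is zero.
    a, b = top_dimensions(sig_a), top_dimensions(sig_b)
    shared = set(a) & set(b)
    return sum(a[d] * b[d] for d in shared)

doc1 = {"dim_102": 0.41, "dim_377": 0.22, "dim_915": 0.08}
doc2 = {"dim_102": 0.35, "dim_915": 0.19, "dim_644": 0.12}
print(match_score(doc1, doc2))  # the higher the score, the more similar the documents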

The following is an example of a match between two Web pages about the Hubble space telescope:

[sample matching screenshot 1]

[sample matching screenshot 2]

ocropus - The OCRopus(tm) open source document analysis and OCR system - Google Project Hosting

OCRopus(tm) is a state-of-the-art document analysis and OCR system, featuring pluggable layout analysis, pluggable character recognition, statistical natural language modeling, and multi-lingual capabilities.

The OCRopus engine is based on two research projects: a high-performance handwriting recognizer developed in the mid-1990s and deployed by the US Census Bureau, and novel high-performance layout analysis methods.

There has been significant refactoring and cleanup over the last year.

  • OCRopus is now effectively usable as a Python library, operating on native NumPy arrays
  • most of the APIs are documented through the Python interfaces
  • Unicode and ligature support largely works
  • all recognition can now be carried out from Python
  • there are top-level commands for recognition and training written in Python
  • classifiers now can cope with large character sets
  • there are tools for clustering and correcting character shapes
  • there is support for ligatures
  • there are numerous bug fixes
  • training is possible on very large datasets (many millions of samples)

What remains to be done before the next official release:

  • the Python tools do not yet do a good job at upper/lower case modeling
  • the language models need to be tested and improved

After that, we will be focusing on these issues:

  • we need to integrate the book-adaptive recognition tools into the Python code
  • the main loop of the RAST layout analysis will be rewritten in Python
  • there will be some new layout analysis that works for distorted pages
  • we need to integrate our orientation detection and text/image segmentation code
  • remove a lot of unused C++ code and consolidate iulib and ocropus C++ code
  • factor out some C++ and Python libraries into separate projects

Related projects:

  • iulib Library (you need to install this)
  • hOCR Tools -- tools for manipulating OCR output
  • DECAPOD -- camera-based document capture and tagged PDF generation
  • PyOpenFST -- Python bindings for OpenFST (for language modeling)

The following is the most important documentation:

  • Release Notes -- summary information about releases
  • Development Install -- how to install the development version of OCRopus
  • Using -- some information about how to use OCRopus
  • Training -- how to train OCRopus
  • Publications -- information about algorithms

If you want to contribute to the primary documentation, please check out the wiki repository (hg clone https://wiki.ocropus.googlecode.com/hg) and submit patches against the documentation.

Please use the "Issues" tab above to submit bugs, feature requests, etc.

When submitting bug reports, please keep the following in mind:

  • include OCRopus version/hg changeset, OS version, compiler version
  • sample images that fail (tag with SampleImage if you attach an image)
  • stack trace from GDB if you can get that

Until the beta release (version 0.5) we mainly care about "big stuff" bug reports and failing documents; minor compile issues or cross-platform issues don't matter that much right now. Please also report only recognition failures on fairly clean scanned documents for the time being.

If you want to contribute code to OCRopus, or if you have a patched version or variant, please use Google's Server Side Clone Support for Mercurial. You can maintain your own variant, add experimental features, etc., and share your patches/changes easily with others even if we haven't incorporated them into the main branch yet.

The system combines the work of many contributors and previous projects. The core developers work at the IUPR research group at the DFKI and gratefully acknowledge funding by Google and the BMBF TextGrid project.

linux - extracting text from MS word files in python - Stack Overflow

For working with MS Word files in Python, there are the Python Win32 extensions, which can be used on Windows. How do I do the same in Linux? Is there any library?


OpenOffice.org can be scripted with Python: see here.

Since OOo can load most MS Word files flawlessly, I'd say that's your best bet.

Take a look at "how the doc format works" and "create word document using PHP in linux". The former is especially useful. Abiword is my recommended tool. There are limitations, though:

However, if the document has complicated tables, text boxes, embedded spreadsheets, and so forth, then it might not work as expected. Developing good MS Word filters is a very difficult process, so please bear with us as we work on getting Word documents to open correctly. If you have a Word document which fails to load, please open a Bug and include the document so we can improve the importer.

I know this is an old question, but I was recently trying to find a way to extract text from MS Word files, and by far the best solution I found was wvLib:

http://wvware.sourceforge.net/

After installing the library, using it in Python is pretty easy:

import commands

exe = 'wvText ' + word_file + ' ' + output_txt_file
out = commands.getoutput(exe)
exe = 'cat ' + output_txt_file
out = commands.getoutput(exe)

And that's it. Pretty much, what we're doing is using the commands.getoutput function to run a couple of shell commands, namely wvText (which extracts text from a Word document) and cat (to read the file output). After that, the entire text from the Word document will be in the out variable, ready to use.

Hopefully this will help anyone having similar issues in the future.
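
A side note on the wvText answer above: the commands module is Python 2 only and was removed in Python 3. Assuming wvText is installed and on the PATH, a roughly equivalent sketch for newer Pythons uses the subprocess module:

import subprocess

word_file = 'whatever.doc'
output_txt_file = 'whatever.txt'

# Run wvText to convert the Word document to plain text...
subprocess.check_call(['wvText', word_file, output_txt_file])

# ...then read the resulting text file back into a string.
with open(output_txt_file) as f:
    out = f.read()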

Use the native Python docx module. Here's how to extract all the text from a doc:

from docx import opendocx, getdocumenttext  # functions from mikemaccana's python-docx

document = opendocx('Hello world.docx')

# This location is where most document content lives 
docbody = document.xpath('/w:document/w:body', namespaces=wordnamespaces)[0]

# Extract all text
print getdocumenttext(document)

See http://github.com/mikemaccana/python-docx

Parsing XML with regexes invokes Cthulhu. Don't do it!

I'm not sure if you're going to have much luck without using COM. The .doc format is ridiculously complex, and is often called a "memory dump" of Word at the time of saving!

@Swati: that's in HTML, which is fine and dandy, but most Word documents aren't so nice!

(Note: I posted this on this question as well, but it seems relevant here, so please excuse the repost.)

Now, this is pretty ugly and pretty hacky, but it seems to work for me for basic text extraction. Obviously to use this in a Qt program you'd have to spawn a process for it etc, but the command line I've hacked together is:

unzip -p file.docx | grep '<w:t' | sed 's/<[^<]*>//g' | grep -v '^[[:space:]]*$'

So that's:

unzip -p file.docx: -p == "unzip to stdout"

grep '<w:t': Grab just the lines containing '<w:t' (w:t is the Word 2007 XML element for "text", as far as I can tell)

sed 's/<[^<]*>//g': Remove everything inside tags

grep -v '^[[:space:]]*$': Remove blank lines

There is likely a more efficient way to do this, but it seems to work for me on the few docs I've tested it with.

As far as I'm aware, unzip, grep and sed all have ports for Windows and any of the Unixes, so it should be reasonably cross-platform. Despite being a bit of an ugly hack ;)

If your intention is to use purely Python modules without calling a subprocess, you can use the zipfile Python module.

import zipfile

content = ""
# Open the .docx file (a zip archive) with zipfile
docx = zipfile.ZipFile('/home/whateverdocument.docx')
# List the members of the archive
unpacked = docx.infolist()
# Find the word/document.xml file in the package and read its contents
for item in unpacked:
    if item.orig_filename == 'word/document.xml':
        content = docx.read(item.orig_filename)

    else:
        pass

Your content string, however, needs to be cleaned up; one way of doing this is:

# Clean the content string of XML tags for better searching
fullyclean = []
halfclean = content.split('<')
for item in halfclean:
    if '>' in item:
        bad_good = item.split('>')
        if bad_good[-1] != '':
            fullyclean.append(bad_good[-1])
        else:
            pass
    else:
        pass

# Assemble a new string with all pure content
content = " ".join(fullyclean)

But there is surely a more elegant way to clean up the string, probably using the re module. Hope this helps.
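
For example, one such re-based cleanup might look like this (a sketch, not from the original answer; it assumes content holds the raw document.xml markup as a string):

import re

# Replace each XML tag with a space, then collapse runs of whitespace.
text = re.sub(r'<[^>]+>', ' ', content)
text = re.sub(r'\s+', ' ', text).strip()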

Antiword: a free MS Word document reader

Among the platforms happily ignored by Microsoft is, naturally, RISC OS, the platform that goes with the computers that were made by Acorn Computers Ltd. of Cambridge in the UK. Today the platform is as much alive as ever, thanks to RISC OS Ltd.

Currently Antiword is able to save Word documents in Text (fff) and Draw (aff) format.

Click to download version 0.36 (16 Oct 2004) (size 129992 bytes) of Antiword.

The new version of Antiword is 26/32 bit neutral. This version will also run on a RiscPC with recent modules. Click to download version 0.37 (21 Oct 2005) (size 129023 bytes) of Antiword.

Antiword has been tested on:

  • A RiscPC, 33 MB, RO 3.6
  • A StrongArm RiscPC, 50 MB, RO 4.0

The programmers' version is released under the GNU General Public License. Check out the pages of the Free Software Foundation for a more detailed description of this license.

The programmers' version does not contain any binaries, but the sources can be used to compile a Linux version. The sources can also be used to compile a version for most variants of the Unix operating system. Users have reported successful compilations on FreeBSD, Solaris, IRIX, Digital Unix (OSF/1), AIX, SCO and HP-UX.

Currently Antiword is able to convert Word documents to plain text, to PostScript, to PDF and to XML/DocBook.

Please remember:
the conversion to XML/DocBook is still experimental,
the support for the Cyrillic alphabet is still experimental.

Click to download version 0.37 (21 Oct 2005) (size 317884 bytes) of Antiword.

Note that this version of Antiword has only been tested on:

  • A PC with SuSE GNU/Linux 9.0 (kernel 2.4.21)
  • A PC with SuSE GNU/Linux 9.2 (kernel 2.6.8)

Apache POI - the Java API for Microsoft Documents

The Apache POI team is pleased to announce the release of 3.8 beta 2. This includes a large number of bug fixes and enhancements.

A full list of changes is available in the change log. People interested should also follow the dev mailing list to track further progress.

See the downloads page for more details.

The Apache POI team is pleased to announce the release of 3.7. This includes a large number of bug fixes, and some enhancements (especially text extraction). See the full release notes for more details.

A full list of changes is available in the change log. People interested should also follow the dev mailing list to track further progress.

See the downloads page for more details.

The Apache POI Project's mission is to create and maintain Java APIs for manipulating various file formats based upon the Office Open XML standards (OOXML) and Microsoft's OLE 2 Compound Document format (OLE2). In short, you can read and write MS Excel files using Java. In addition, you can read and write MS Word and MS PowerPoint files using Java. Apache POI is your Java Excel solution (for Excel 97-2008). We have a complete API for porting other OOXML and OLE2 formats and welcome others to participate.

OLE2 files include most Microsoft Office files such as XLS, DOC, and PPT as well as MFC serialization API based file formats. The project provides APIs for the OLE2 Filesystem (POIFS) and OLE2 Document Properties (HPSF).

Office OpenXML Format is the new standards-based XML file format found in Microsoft Office 2007 and 2008. This includes XLSX, DOCX and PPTX. The project provides a low-level API to support the Open Packaging Conventions using openxml4j.

For each MS Office application there exists a component module that attempts to provide a common high-level Java API to both OLE2 and OOXML document formats. This is most developed for Excel workbooks (SS=HSSF+XSSF). Work is progressing for Word documents (HWPF+XWPF) and PowerPoint presentations (HSLF+XSLF).

The project has recently added support for Outlook (HSMF). Microsoft opened the specifications to this format in October 2007. We would welcome contributions.

There are also projects for Visio (HDGF), TNEF (HMEF), and Publisher (HPBF).

As a general policy we collaborate as much as possible with other projects to provide this functionality. Examples include: Cocoon, for which there are serializers for HSSF; OpenOffice.org, with whom we collaborate in documenting the XLS format; and Tika / Lucene, for which we provide format interpreters. When practical, we donate components directly to those projects for POI-enabling them.

A major use of the Apache POI api is for Text Extraction applications such as web spiders, index builders, and content management systems.

So why should you use POIFS, HSSF or XSSF?

You'd use POIFS if you had a document written in OLE 2 Compound Document Format, probably written using MFC, that you needed to read in Java. Alternatively, you'd use POIFS to write OLE 2 Compound Document Format if you needed to inter-operate with software running on the Windows platform. We are not just bragging when we say that POIFS is the most complete and correct implementation of this file format to date!

You'd use HSSF if you needed to read or write an Excel file using Java (XLS). You'd use XSSF if you need to read or write an OOXML Excel file using Java (XLSX). The combined SS interface allows you to easily read and write all kinds of Excel files (XLS and XLSX) using Java.

The Apache POI Project provides several component modules some of which may not be of interest to you. Use the information on our Components page to determine which jar files to include in your classpath.

So you'd like to contribute to the project? Great! We need enthusiastic, hard-working, talented folks to help us on the project. So if you're motivated, ready, and have the time, download the source from the Subversion Repository, build the code, join the mailing lists, and we'll be happy to help you get started on the project!

Please read our Contribution Guidelines. When your contribution is ready submit a patch to our Bug Database.

by Andrew C. Oliver, Glen Stampoultzis, Avik Sengupta, Rainer Klute, David Fisher

PDFMiner

Last Modified: Sun Feb 27 10:51:18 UTC 2011

Python PDF parser and analyzer

Homepage   Recent Changes   PDFMiner API

PDFMiner is a tool for extracting information from PDF documents. Unlike other PDF-related tools, it focuses entirely on getting and analyzing text data. PDFMiner allows one to obtain the exact location of text on a page, as well as other information such as fonts or lines. It includes a PDF converter that can transform PDF files into other text formats (such as HTML). It has an extensible PDF parser that can be used for purposes other than text analysis.

  • Written entirely in Python. (for version 2.4 or newer)
  • Parse, analyze, and convert PDF documents.
  • PDF-1.7 specification support. (well, almost)
  • CJK languages and vertical writing scripts support.
  • Various font types (Type1, TrueType, Type3, and CID) support.
  • Basic encryption (RC4) support.
  • PDF to HTML conversion (with a sample converter web app).
  • Outline (TOC) extraction.
  • Tagged contents extraction.
  • Reconstruct the original layout by grouping text chunks.

PDFMiner is about 20 times slower than other C/C++-based counterparts such as XPdf.
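
For programmatic use (see the PDFMiner API link above), a minimal text-extraction sketch might look like the following. The class and function names (PDFResourceManager, TextConverter, LAParams, process_pdf) are assumed from the pdfminer releases of this period and may differ in other versions:

from pdfminer.pdfinterp import PDFResourceManager, process_pdf
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams

# Extract the text of sample.pdf into out.txt (assumed API of this era).
rsrcmgr = PDFResourceManager()
outfp = open('out.txt', 'w')
device = TextConverter(rsrcmgr, outfp, codec='utf-8', laparams=LAParams())
fp = open('sample.pdf', 'rb')
process_pdf(rsrcmgr, device, fp)
fp.close()
device.close()
outfp.close()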

Online Demo: (pdf -> html conversion webapp)
http://pdf2html.tabesugi.net:8080/

Source distribution:
http://pypi.python.org/pypi/pdfminer/

github:
https://github.com/euske/pdfminer/

Questions and comments:
http://groups.google.com/group/pdfminer-users/

  1. Install Python 2.4 or newer. (Python 3 is not supported.)
  2. Download the PDFMiner source.
  3. Unpack it.
  4. Run setup.py to install:
    # python setup.py install
    
  5. Do the following test:
    $ pdf2txt.py samples/simple1.pdf
    Hello
    
    World
    
    Hello
    
    World
    
    H e l l o
    
    W o r l d
    
    H e l l o
    
    W o r l d
    
  6. Done!

In order to process CJK languages, you need to take an additional step during installation:

# make cmap
python tools/conv_cmap.py pdfminer/cmap Adobe-CNS1 cmaprsrc/cid2code_Adobe_CNS1.txt cp950 big5
reading 'cmaprsrc/cid2code_Adobe_CNS1.txt'...
writing 'CNS1_H.py'...
...
(this may take several minutes)

# python setup.py install

On Windows machines which don't have the make command, paste the following commands at a command prompt:

python tools\conv_cmap.py pdfminer\cmap Adobe-CNS1 cmaprsrc\cid2code_Adobe_CNS1.txt cp950 big5
python tools\conv_cmap.py pdfminer\cmap Adobe-GB1 cmaprsrc\cid2code_Adobe_GB1.txt cp936 gb2312
python tools\conv_cmap.py pdfminer\cmap Adobe-Japan1 cmaprsrc\cid2code_Adobe_Japan1.txt cp932 euc-jp
python tools\conv_cmap.py pdfminer\cmap Adobe-Korea1 cmaprsrc\cid2code_Adobe_Korea1.txt cp949 euc-kr
python setup.py install

PDFMiner comes with two handy tools: pdf2txt.py and dumppdf.py.

pdf2txt.py extracts text contents from a PDF file. It extracts all the text that is rendered programmatically, i.e., text represented as ASCII or Unicode strings. It cannot recognize text drawn as images, which would require optical character recognition. It also extracts the corresponding locations, font names, font sizes, and writing direction (horizontal or vertical) for each text portion. You need to provide a password for protected PDF documents whose access is restricted. You cannot extract any text from a PDF document that does not have extraction permission.

Note: Not all characters in a PDF can be safely converted to Unicode.

$ pdf2txt.py -o output.html samples/naacl06-shinyama.pdf
(extract text as an HTML file whose filename is output.html)

$ pdf2txt.py -V -c euc-jp -o output.html samples/jo.pdf
(extract a Japanese HTML file in vertical writing, CMap is required)

$ pdf2txt.py -P mypassword -o output.txt secret.pdf
(extract a text from an encrypted PDF file)
-o filename
Specifies the output file name. By default, it prints the extracted contents to stdout in text format.
-p pageno[,pageno,...]
Specifies a comma-separated list of the page numbers to be extracted. Page numbers start from one. By default, it extracts text from all pages.
-c codec
Specifies the output codec.
-t type
Specifies the output format. The following formats are currently supported.
  • text : TEXT format. (Default)
  • html : HTML format. Not recommended for extraction purposes because the markup is messy.
  • xml : XML format. Provides the most information available.
  • tag : "Tagged PDF" format. A tagged PDF has its own contents annotated with HTML-like tags. pdf2txt tries to extract its content streams rather than inferring its text locations. Tags used here are defined in the PDF specification (See §10.7 "Tagged PDF").
-I image_directory
Specifies the output directory for image extraction. Currently only JPEG images are supported.
-M char_margin
-L line_margin
-W word_margin
These are the parameters used for layout analysis. In an actual PDF file, text might be split into several chunks in the middle of a run, depending on the authoring software. Therefore, text extraction needs to splice text chunks. In the figure below, two text chunks whose distance is closer than the char_margin (shown as M) are considered continuous and are grouped into one. Also, two lines whose distance is closer than the line_margin (L) are grouped into a text box, which is a rectangular area that contains a "cluster" of text. Furthermore, blank characters (spaces) may need to be inserted when the distance between two words is greater than the word_margin (W), as a blank between words might not be represented as a space, but indicated only by the positioning of each word.

Each value is specified not as an actual length, but as a proportion of the length to the size of each character in question. The default values are M = 1.0, L = 0.3, and W = 0.2.

[figure: the characters of "Quick brown fox", with the char_margin (M), word_margin (W), and line_margin (L) distances marked]
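
For instance, the layout parameters can be overridden on the command line (the values here are only illustrative):

$ pdf2txt.py -M 1.5 -L 0.5 -W 0.1 -o output.txt samples/simple1.pdf
(extract text with looser character and line margins and a tighter word margin)
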
-n
Suppress layout analysis.
-A
Forces layout analysis to be performed on all text strings, including text contained in figures.
-V
Allows vertical writing detection.
-Y layout_mode
Specifies how the page layout should be preserved. (Currently only applies to HTML format.)
  • exact : preserve the exact location of each individual character (a large and messy HTML).
  • normal : preserve the location and line breaks in each text block. (Default)
  • loose : preserve the overall location of each text block.
-s scale
Specifies the output scale. Can be used in HTML format only.
-m maxpages
Specifies the maximum number of pages to extract. By default, it extracts all the pages in a document.
-P password
Provides the user password to access PDF contents.
-d
Increases the debug level.

dumppdf.py dumps the internal contents of a PDF file in pseudo-XML format. This program is primarily for debugging purposes, but it's also possible to extract some meaningful contents (such as images).

$ dumppdf.py -a foo.pdf
(dump all the headers and contents, except stream objects)

$ dumppdf.py -T foo.pdf
(dump the table of contents)

$ dumppdf.py -r -i6 foo.pdf > pic.jpeg
(extract a JPEG image)
-a
Instructs dumppdf.py to dump all the objects. By default, it only prints the document trailer (like a header).
-i objno,objno, ...
Specifies PDF object IDs to display. Comma-separated IDs, or multiple -i options are accepted.
-p pageno,pageno, ...
Specifies the page number to be extracted. Comma-separated page numbers, or multiple -p options are accepted. Note that page numbers start from one, not zero.
-r (raw)
-b (binary)
-t (text)
Specifies the output format of stream contents. Because the contents of stream objects can be very large, they are omitted when none of the options above is specified.

With the -r option, the "raw" stream contents are dumped without decompression. With the -b option, the decompressed contents are dumped as a binary blob. With the -t option, the decompressed contents are dumped in a text format, similar to repr() output. When the -r or -b option is given, no stream header is displayed, for ease of saving the output to a file.

-T
Shows the table of contents.
-P password
Provides the user password to access PDF contents.
-d
Increases the debug level.

Changes:

  • 2011/02/27: Bugfixes and layout analysis improvements. Thanks to fujimoto.report.
  • 2010/12/26: A couple of bugfixes and minor improvements. Thanks to Kevin Brubeck Unhammer and Daniel Gerber.
  • 2010/10/17: A couple of bugfixes and minor improvements. Thanks to standardabweichung and Alastair Irving.
  • 2010/09/07: A minor bugfix. Thanks to Alexander Garden.
  • 2010/08/29: A couple of bugfixes. Thanks to Sahan Malagi, pk, and Humberto Pereira.
  • 2010/07/06: Minor bugfixes. Thanks to Federico Brega.
  • 2010/06/13: Bugfixes and improvements on CMap data compression. Thanks to Jakub Wilk.
  • 2010/04/24: Bugfixes and improvements on TOC extraction. Thanks to Jose Maria.
  • 2010/03/26: Bugfixes. Thanks to Brian Berry and Lubos Pintes.
  • 2010/03/22: Improved layout analysis. Added regression tests.
  • 2010/03/12: A couple of bugfixes. Thanks to Sean Manefield.
  • 2010/02/27: Changed the way of internal layout handling. (LTTextItem -> LTChar)
  • 2010/02/15: Several bugfixes. Thanks to Sean.
  • 2010/02/13: Bugfix and enhancement. Thanks to André Auzi.
  • 2010/02/07: Several bugfixes. Thanks to Hiroshi Manabe.
  • 2010/01/31: JPEG image extraction supported. Page rotation bug fixed.
  • 2010/01/04: Python 2.6 warning removal. More doctest conversion.
  • 2010/01/01: CMap bug fix. Thanks to Winfried Plappert.
  • 2009/12/24: RunLengthDecode filter added. Thanks to Troy Bollinger.
  • 2009/12/20: Experimental polygon shape extraction added. Thanks to Yusuf Dewaswala for reporting.
  • 2009/12/19: CMap resources are now the part of the package. Thanks to Adobe for open-sourcing them.
  • 2009/11/29: Password encryption bug fixed. Thanks to Yannick Gingras.
  • 2009/10/31: SGML output format is changed and renamed as XML.
  • 2009/10/24: Charspace bug fixed. Adjusted for 4-space indentation.
  • 2009/10/04: Another matrix operation bug fixed. Thanks to Vitaly Sedelnik.
  • 2009/09/12: Fixed rectangle handling. Able to extract image boundaries.
  • 2009/08/30: Fixed page rotation handling.
  • 2009/08/26: Fixed zlib decoding bug. Thanks to Shon Urbas.
  • 2009/08/24: Fixed a bug in character placing. Thanks to Pawan Jain.
  • 2009/07/21: Improvement in layout analysis.
  • 2009/07/11: Improvement in layout analysis. Thanks to Lubos Pintes.
  • 2009/05/17: Bugfixes, massive code restructuring, and simple graphic element support added. setup.py is supported.
  • 2009/03/30: Text output mode added.
  • 2009/03/25: Encoding problems fixed. Word splitting option added.
  • 2009/02/28: Robust handling of corrupted PDFs. Thanks to Troy Bollinger.
  • 2009/02/01: Various bugfixes. Thanks to Hiroshi Manabe.
  • 2009/01/17: Handling a trailer correctly that contains both /XrefStm and /Prev entries.
  • 2009/01/10: Handling Type3 font metrics correctly.
  • 2008/12/28: Better handling of word spacing. Thanks to Christian Nentwich.
  • 2008/09/06: A sample pdf2html webapp added.
  • 2008/08/30: ASCII85 encoding filter support.
  • 2008/07/27: Tagged contents extraction support.
  • 2008/07/10: Outline (TOC) extraction support.
  • 2008/06/29: HTML output added. Reorganized the directory structure.
  • 2008/04/29: Bugfix for Win32. Thanks to Chris Clark.
  • 2008/04/27: Basic encryption and LZW decoding support added.
  • 2008/01/07: Several bugfixes. Thanks to Nick Fabry for his vast contribution.
  • 2007/12/31: Initial release.
  • 2004/12/24: Start writing the code out of boredom...

TODO:

  • PEP-8 and PEP-257 conformance.
  • Better documentation.
  • Better text extraction / layout analysis. (writing mode detection, Type1 font file analysis, etc.)
  • Robust error handling.
  • Crypt stream filter support. (More sample documents are needed!)
  • CCITTFax stream filter support.

(This is the so-called MIT/X License.)

Copyright (c) 2004-2010 Yusuke Shinyama <yusuke at cs dot nyu dot edu>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


Yusuke Shinyama (yusuke at cs dot nyu dot edu)