Meeting setup
- Date: 02.04.2007
- Time: 09.30 Norw. time
- Place: Internet
- Tools: SubEthaEdit, iChat
Agenda
- Opening, agenda review
- Reviewing the task list from last week
- Documentation - divvun.no
- Corpus gathering
- Corpus infrastructure
- Infrastructure
- Linguistics
- name lexicon infrastructure
- Spellers
- Other issues
- Summary, task lists
- Closing
1. Opening, agenda review, participants
Opened at 09:36.
Present: Saara, Sjur, Steinar, Tomi, Trond
Absent: Børre, Maaren, Thomas
Agenda accepted as is.
2. Updated task status since last meeting
Børre
- update and fix our documentation and infrastructure as Steinar finds
problem areas - low priority
- find missing nob parallel texts in corpus
- add prefixes to the PLX conversion
- improve number PLX conversion
- add sma texts to the corpus repository
- improve automatic alignment process
- add potential speller test texts
- collect a list of PR recipients, forward to Berit Karen Paulsen
- add version info to the generated speller lexicons
- run all known spelling errors in the corpus through the speller
- test the typos.txt list, and check that all entries are properly corrected
- consider how to do a regression self-test
- add extraction of all known spelling errors in the regular corpus (not the
prooftest corpus), and check that they are properly corrected
- fix bugs!
Maaren
- lexicalise actio compounds
- Manually mark speller test documents for typos
Saara
- prepare more files for manual alignment
- mark-up the added speller test texts, using our existing xml format
- improved markup almost ready.
- improve cgi-bin scripts
- add new features to the paradigm generator
- add new XSL/XML headers for proofing test docs
- continue with speller test data
- done as specified so far. Next: the xml format? Yes, please :)
- compilation of verb lists
- not done, will do this week.
- speed in smj conversion
- not done, is this still a problem?
- fix bugs!
- other
- some reformatting of prooftest-MinAigi filenames
Sjur
- hire linguist
- done! (some bureaucracy left)
- finish press release for the beta
- collect a list of PR recipients
- add version info to the generated speller lexicons
- done - could still use some improvement
- run all known spelling errors in the corpus through the speller
- we’re almost able to do that:)
- consider how to do a regression self-test
- document the AppleScript testing tool
- write tools for statistical analysis of test results
- make improved smj speller (incl. derivations and compounds)
- worked on it a lot last week, but we encountered several hurdles with the
PLX conversion
- fix bugs!
Steinar
- Beta testing: Align manually (shorter texts)
- Manually mark speller test texts for typos (making them into gold standards),
add the texts to the orig/catalogue
- some texts finished, but they are not added to the orig/catalogue yet
- Complete the semantic sets in sme-dis.rle
- missing lists
- added missing literature terms
- Look at the actio compound issue when adding from missing lists
- Align corpus manually
- fix bugs!
Thomas
- work with compounding
- Lack of lowering before hyphen: Twol rewrite.
- translate beta release docs to sme and smj
- Add potential speller test texts
- fix bugs!
Tomi
- make improved smj speller (incl. derivations and compounds)
- make PLX conversion test sample; add conversion testing to the make file
- improve number PLX conversion
- update ccat to handle error/correction markup
- add version info to the generated speller lexicons
- fix bugs!
- other
- Added nouns and adjectives to xfst based PLX conversion
Trond
- Test the beta versions
- Work on the parallel corpus issues
- Intensive work on the anchor list.
- Discuss with Anders
- Work on the aligner (with Børre)
- done, still more to do
- fix sme texts in corpus this month
- find missing nob parallel texts in corpus, go through Saara’s list
- Postpone these tasks to after the beta:
- update the smj proper noun lexicon, and refine the morphological analysis,
cf. the propernoun-smj-lex.txt
- Go through the Num bugs
- Improve automatic alignment process
- Align corpus manually
- Add potential speller test texts
- collect a list of PR recipients
- fix bugs!
3. Documentation
The open documentation issues fall into these three categories:
- Beta documentation for testers
- Documentation for the online corpora
- General documentation improvement after Steinar’s test (for open-source
release)
TODO:
- write form to request corpus user account (Børre, Sjur, Trond)
- delayed till after the beta release
- document how to apply for access to closed corpus, and details on the corpus
and its use in general (Børre, Sjur, Trond)
- delayed till after the beta release
- correct and improve it based on feedback from Steinar (Børre)
- beta documentation (see separate beta section below)
4. Corpus gathering
TODO:
- sme texts: no new additions, fix corpus errors during this month
(Børre, Trond, Saara)
- missing nob parallel texts should be added if such holes are found
(Børre, Trond)
- Go through the list of missing or erroneous nob texts, based upon Saara’s
perfect list (Børre, Trond)
- add sma texts to the corpus repository (Børre)
5. Corpus infrastructure
Alignment
Trond has worked a lot on improving the anchor list, which was fruitful. The
anchor list should still be improved further, though, and it might be useful
to run the alignment again.
There is a bug in the web interface, leading to wrong alignment: one sentence
off in some but not all cases. Trond has discussed this with Lars.
TODO
- go through other directories (nob directories, sd directories), fix
parallelity information for other documents (2 hours) (Børre)
- Improve the automatic process:
- Improve the anchor list and realign (Trond, Børre)
- Adding words alone does not improve alignment; the format must be considered
as well. If a wildcard entry such as guo* is cut too early, the wrong word
can be selected.
- The documents still have some formatting issues which cause trouble in
alignment (in some documents tables are included in the text, in others not,
etc.).
- Test and improve settings in the aligner
- Align manually (Trond, Steinar), especially shorter terminological texts.
This is a good idea; Saara will look for troublesome texts and prepare them
for manual alignment.
6. Infrastructure
TODO:
- update and fix our documentation and infrastructure as Steinar finds
problem areas (Børre)
7. Linguistics
North Sámi
TODO:
- lexicalise actio compounds. Example: vuolggasadji vs. vuolginsadji
(Maaren)
- fix stuorra-oslolaš lower case o (Sjur, Thomas, Trond)
- postponed till after the public beta
Lule Sámi
TODO:
- refine smj proper noun lexica, cf. the propernoun-smj-lex.txt
(Thomas, Trond)
8. Name lexicon infrastructure
Decisions made in Tromsø can be found in
[this meeting memo|/admin/physical_meetings/tromso-2006-08-propnoun.html].
TODO:
- fix bugs in lexc2xml; add comments to the log element (Saara)
- finish first version of the editing (Sjur)
- test editing of the xml files. If ok, then: (Sjur, Thomas, Trond)
- make terms-smX.xml automatically from propernoun-sme-lex.xml (add nob as
well); the morphological section should be kept intact, e.g. in
propernoun-sme-morph.txt (Sjur, Saara)
- convert propernoun-($lang)-lex.txt to a derived file from common xml files
(Sjur, Tomi, Saara)
- implement data synchronisation between risten.no and
the cvs repo, and possibly other servers (ie the G5 as an alternative server
to the public risten.no - it might be faster and better suited than the
official one; also local installations could be treated the same way)
- start to use the xml file as source file
- clean terms-sme.xml such that all names have the correct tag for their use
(e.g. @type=secondary) (Thomas, Maaren, linguists)
- merge placenames which are erroneously in different entries: e.g. Helsinki,
Helsingfors, Helsset (linguists)
- publish the name lexicon on risten.no (Sjur)
- add missing parallel names for placenames (linguists)
- add informative links between first names like Niillas and Nils
(linguists)
9. Spellers
OOo speller(s)
TODO after the MS Office Beta is delivered:
- add Hunspell data generation to the lexc2xspell (Tomi - after the
PLX data generation is finished)
- study the Hunspell formalism in detail (Børre, Sjur, Tomi)
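To make the Hunspell study concrete: a Hunspell speller consists of a .dic
file (an entry count followed by one word per line, optionally with affix
flags) and an .aff file with the affix rules. Below is a minimal sketch in
Python, with hypothetical file names, of generating the simplest possible
data from a plain word list; the real lexc2xspell output would also need
affix rules and flags.

# Sketch only: turn a plain word list into a bare-bones Hunspell lexicon.
# File names (wordlist.txt, se.dic, se.aff) are hypothetical examples.
def write_hunspell(wordlist_path, dic_path, aff_path):
    with open(wordlist_path, encoding="utf-8") as f:
        words = sorted({line.strip() for line in f if line.strip()})
    with open(dic_path, "w", encoding="utf-8") as dic:
        dic.write(str(len(words)) + "\n")   # .dic begins with the entry count
        for word in words:
            dic.write(word + "\n")          # no affix flags in this sketch
    with open(aff_path, "w", encoding="utf-8") as aff:
        aff.write("SET UTF-8\n")            # minimal .aff: just the character set

if __name__ == "__main__":
    write_hunspell("wordlist.txt", "se.dic", "se.aff")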
Testing
Selecting test texts
In principle, we need the same text types as the ones we already aim at in our
corpus. In practice, we must use new, unseen texts. We would like to have a
balanced input of texts, but right now:
- Min Áigi net issue
- Blogs
- Our own linguistic texts
- New department texts from the net
Spelling Error Markup
Procedure for marking up:
- pick a file in /home/steinarn/gt/sme/zcorp/prooftest/bound/sme/news/MinAigi/,
e.g. index2.php_id_647.html.xml
- rename it from .xml to .correct.xml: index2.php_id_647.html.correct.xml
- copy to your own computer
- open in SEE or XMLEditor
- add manual markup according to the established convention
- when done, copy the file back to victorio - see dir structure below
Directory structure and file locations for manually corrected files:
1  prooftest/.../orig/file.html (loading)
1  prooftest/.../orig/file.html.xsl (converting)
2  prooftest/.../bound/file.html.xml - converted to this file; copy it back to orig as 3a
3a prooftest/.../orig/file.html.correct.xml - speling§spelling markup is added here
manually, using RCS to check in each generation of manual markup
3b prooftest/.../bound/file.html.xml - <error corr="spelling">speling</error>
- missing: the last changes to the eror§error conversion, + the last ccat option
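To illustrate the convention, here is a small sketch (not the project's actual
converter, which the corpus tools provide) of how the in-text error§correction
markup could be rewritten into the <error corr="..."> elements used in the
bound files:

import re

# Sketch only: rewrite "error§correction" markup in running text into
# <error corr="correction">error</error> elements. Handles both the simple
# case (speling§spelling) and multi-string corrections (erorr.Og§(error. Og)).
MARKUP = re.compile(r"(\S+)§\(([^)]*)\)|(\S+)§(\S+)")

def convert(text):
    def repl(m):
        if m.group(1) is not None:          # multi-string correction
            error, corr = m.group(1), m.group(2)
        else:                               # single-string correction
            error, corr = m.group(3), m.group(4)
        return '<error corr="%s">%s</error>' % (corr, error)
    return MARKUP.sub(repl, text)

print(convert("this sentence has a speling§spelling error"))
# -> this sentence has a <error corr="spelling">speling</error> error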
TODO:
- Manually mark test texts for typos (making them into gold standards)
(Steinar)
- mark corrections as erorr§errors; when correcting to multiple strings:
erorr.Og§(error. Og)
- update correction markup xml conversion to handle the second case
(Saara)
- change ccat to handle error/correction markup (Tomi)
- extract the document text with original errors (input to standard speller
test) - default? yes (= the old behaviour)
- extract the document text with the available corrections (the corrected
document), to be used to automatically evaluate testing tool output; could
also be used to give the corpus tagger an easier job :-)
- extract all and only the spelling errors with their corrections, in a
tab-separated file: erorr<TAB>correction - just as typos.txt. Used for
regression testing (making sure we don’t take steps back); see the extraction
sketch after this list
- extract the whole text, both the original text and the corrections, in a
tab-separated file: erorr<TAB>correction - just as typos.txt. Used for
another type of speller testing. The correction should be an empty string if
there is no correction.
- Set up ways of adding meta-information (source info, used in testing or not,
added to lexicon or not)
- add the wanted xml elements to the XSL header (Saara) (source info is
the same as for regular docs):
- worn out (ie not suitable for speller testing any more, only for regression
testing)
- lexicalised (all unknown, correctly spelled words added to lexicon)
- Conduct tests on new beta versions on the basis of the unspoiled gold standard
documents (whoever has time), and fill in data from testing (the testers:
who?)
- alternatively: make test scripts that will run the tests automatically,
collect the numbers, and transform them into test results (Sjur)
- include the ones already tested in the testing/ catalogue
- test each version before beta release
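As a sketch of the tab-separated extraction described above (the real
implementation is the planned ccat change), the error/correction pairs could
be pulled out of a corrected document like this; the file name is just an
example:

import sys
import xml.etree.ElementTree as ET

# Sketch only: print all <error corr="...">...</error> pairs from a
# .correct.xml document as tab-separated error/correction lines, in the same
# shape as typos.txt. The correction defaults to an empty string.
def extract_pairs(path):
    tree = ET.parse(path)
    for err in tree.iter("error"):
        original = "".join(err.itertext()).strip()
        correction = err.get("corr", "")
        print(original + "\t" + correction)

if __name__ == "__main__":
    extract_pairs(sys.argv[1])              # e.g. file.html.correct.xml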
TODO:
- document the AppleScript testing tool (Sjur)
- write tools for statistical analysis of test results (Sjur)
- started; Saara has continued, and will continue with Forrest integration
- Saara found a bug in the Polderland speller tool, now reported to them
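The shape of the statistics tool is still open; as a rough sketch of the kind
of figures we are after, assuming a hypothetical result format with one
error<TAB>expected<TAB>suggestion line per test word, one could compute the
share of errors for which the speller's suggestion equals the gold-standard
correction:

import sys

# Rough sketch with an assumed input format: error<TAB>expected<TAB>suggestion
# per line. The real result format and statistics are up to the testing tools.
def summarise(path):
    total = hits = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 3:
                continue
            _error, expected, suggestion = fields[:3]
            total += 1
            hits += suggestion == expected
    if total:
        print("%d/%d errors corrected (%.1f%%)" % (hits, total, 100.0 * hits / total))

if __name__ == "__main__":
    summarise(sys.argv[1])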
Regression tests
TODO:
- add extraction of all known spelling errors in the corpus (not the
prooftest corpus), and check that they are properly corrected (Børre, Sjur)
- ccat is now ready; it should be integrated in the Makefile (Sjur, Tomi)
- test the typos.txt list, and check that all entries are properly corrected
(Børre, Sjur)
- consider how to do a regression self-test, ie, how to test the full
wordlist (Børre, Sjur)
- extract all the base forms in the lexicon, and run them through the speller
- extract all SUB-marked entries, and run them through the lexicon
- integrate these in the make file (Sjur)
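How the self-test should work is still to be decided; one possible shape,
sketched here with hypothetical file names and a hypothetical
error<TAB>status result format, is to compare the current speller run on
typos.txt against the previous accepted run and flag every entry that used to
be corrected but no longer is:

# Sketch only: a possible regression self-test. Both input files are assumed
# to hold "error<TAB>status" lines from a speller run over typos.txt; the test
# fails if an entry that passed in the baseline run fails in the current one.
def regressions(baseline_path, current_path):
    def load(path):
        result = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                if "\t" in line:
                    word, status = line.rstrip("\n").split("\t", 1)
                    result[word] = status
        return result
    baseline, current = load(baseline_path), load(current_path)
    return [w for w, status in baseline.items()
            if status == "ok" and current.get(w) != "ok"]

if __name__ == "__main__":
    failed = regressions("typos.baseline.txt", "typos.current.txt")
    for word in failed:
        print("REGRESSION:", word)
    raise SystemExit(1 if failed else 0)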
Localisation
We need to translate the info added to our front page (and a separate page)
regarding the beta release. Also the press release needs to be translated.
TODO:
- translate beta release docs to sme (Thomas)
- translate beta release docs to smj (Thomas)
Postverbal clitics
How to include compounding restriction comment tags in the transducers:
giv0ri:giv'ri ALBMI ; !+SgNomCmp +SgGenCmp +PlGenCmp
=> (using a perl script or similar)
+SgNomCmp+SgGenCmp+PlGenCmpgiv0ri:giv'ri ALBMI ; !
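The rewrite above could be done with a perl one-liner or a small script; a
sketch in Python of exactly that transformation (move the compounding tags
out of the trailing comment and prepend them to the entry):

import re
import sys

# Sketch of the rewrite shown above: lines like
#   giv0ri:giv'ri ALBMI ; !+SgNomCmp +SgGenCmp +PlGenCmp
# become
#   +SgNomCmp+SgGenCmp+PlGenCmpgiv0ri:giv'ri ALBMI ; !
# Lines without tagged comments are passed through unchanged.
TAGGED = re.compile(r"^(?P<entry>.*;\s*)!(?P<tags>(\s*\+\w+)+)\s*$")

def move_tags(line):
    m = TAGGED.match(line)
    if not m:
        return line
    tags = "".join(m.group("tags").split())     # drop the spaces between tags
    return tags + m.group("entry") + "!"

if __name__ == "__main__":
    for line in sys.stdin:
        print(move_tags(line.rstrip("\n")))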
TODO:
- improve prefix conversion to PLX (Tomi)
- improve middle noun conversion to PLX (Tomi)
- improve noun + adjective PLX conversion: (Tomi)
- compounding stems - how do we generate them? Using the java client?
+SgNomCmp+Cmpnd = sáme–, should give the correct compounding stem, shouldn’t it?
We want to optionally go from sáme- NLI to sáme NL: - NLI (->) NL
- compounding tags - we need to obey them when making the transducers.
Suggestion - see above.
- add propernouns to xfst-based conversion
- make conversion test sample; add conversion testing to the make file (Tomi)
- improve number conversion (Børre, Tomi)
- run xfst-based PLX conversion on victorio, make the result available on our
public server (Saara, Sjur)
Public Beta release
Due to the problems with generating the PLX files discussed above, we need to
push the release date back to mid-April, before the “physical” meeting.
DONE:
- delivered PLX data of sme and smj, including compounding
- translated Windows installer to sme and smj
- installed PLX compiler on the G5 at /usr/local/bin/mklex* (one version for
sme and one for smj)
- added resources needed for compiling PLX lexicons to our cvs repo
- tested the beta drop from Polderland - good that we did, it is absolutely
unacceptable (our responsibility - only linguistic errors (poor coverage)
found so far)
- added compilation of MS Office spellers as part of the Makefile
- installed Windows and MS Office; tested the tools on Windows
TODO:
- improved smj speller (incl. derivations and compounds) (Sjur, Tomi)
- finish press release (Sjur)
- add info to front page (incl. download links) (Børre)
- write separate page with detailed info (incl. download links) (Børre)
- translate press release, web pages (Børre, Thomas, whoever)
- collect a list of PR recipients, forward to Berit Karen Paulsen
(Børre, Sjur, Trond)
Version identification of speller lexicons
TODO:
- add version info to the generated speller lexicons
- done - still needs to be improved by automatically adding the date of
compilation (Sjur)
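One possible way to automate this, sketched with hypothetical file names: have
the lexicon build write a small version file next to the generated speller
lexicon, containing the version string and the build date. How (or whether)
the same information can be embedded inside the PLX lexicon itself depends on
what the format allows.

import datetime
from pathlib import Path

# Sketch only, hypothetical file names: record version and build date in a
# sidecar file next to the generated speller lexicon, so every delivered
# lexicon can be identified later.
def write_version_info(lexicon_path, version="1.0beta"):
    stamp = "%s\t%s\t%s\n" % (Path(lexicon_path).name, version,
                              datetime.date.today().isoformat())
    Path(str(lexicon_path) + ".version").write_text(stamp, encoding="utf-8")

if __name__ == "__main__":
    write_version_info("sme.plx")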
10. Other
Project meeting IRL
The planned gathering will have to be on 16.-20.4., in Kárásjohki or
Guovdageaidnu. All of Divvun should participate, and some from the UiTø
project as well.
Corpus contracts
TODO:
- publish corpus contracts and project infra as open-source on NoDaLi-sta
(Sjur)
- delayed until the public beta is out the door
Bug fixing
51 open Divvun/Disamb bugs, and 23 risten.no bugs
Easter
- Børre: away
- Maaren: away
- Saara: working
- Sjur: working
- Thomas: away 2/4 - 4/4
- Tomi: At work, in Karigasniemi
- Trond: working
11. Next meeting, closing
The next meeting is 2.4.2007, 09:30 Norwegian time.
The meeting was closed at 12:40.
Appendix - task lists for the next week
Børre
- update and fix our documentation and infrastructure as Steinar finds
problem areas - low priority
- find missing nob parallel texts in corpus
- improve number PLX conversion
- add sma texts to the corpus repository
- improve automatic alignment process
- collect a list of PR recipients, forward to Berit Karen Paulsen
- run all known spelling errors in the corpus through the speller
- add extraction of all known spelling errors in the regular corpus (not the
prooftest corpus), and check that they are properly corrected
- fix bugs!
Maaren
- lexicalise actio compounds
- Manually mark speller test documents for typos
Saara
- prepare more files for manual alignment
- mark-up the added speller test texts, using our existing xml format
- improve cgi-bin scripts
- add new features to the paradigm generator
- add new XSL/XML headers for proofing test docs
- continue with speller test data
- compilation of verb lists
- speed in smj conversion
- fix bugs!
Sjur
- finish press release for the beta
- collect a list of PR recipients
- improve version info in the speller lexicons
- run all known spelling errors in the corpus through the speller
- document the AppleScript testing tool
- write tools for statistical analysis of test results
- integrate regression self tests with the make file
- make improved smj speller (incl. derivations and compounds)
- fix bugs!
Steinar
- Beta testing: Align manually (shorter texts)
- Manually mark speller test texts for typos (making them into gold standards),
add the texts to the orig/catalogue
- Complete the semantic sets in sme-dis.rle
- missing lists
- Look at the actio compound issue when adding from missing lists
- Align corpus manually
- fix bugs!
Thomas
- work with compounding
- Lack of lowering before hyphen: Twol rewrite.
- translate beta release docs to sme and smj
- Add potential speller test texts
- fix bugs!
Tomi
- make improved smj speller (incl. derivations and compounds)
- make PLX conversion test sample; add conversion testing to the make file
- improve number PLX conversion
- improve prefix and middle-noun PLX conversion
- fix bugs!
Trond
- Test the beta versions
- Work on the parallel corpus issues
- Discuss with Anders
- Work on the aligner (with Børre)
- fix sme texts in corpus this month
- find missing nob parallel texts in corpus, go through Saara’s list
- Postpone these tasks to after the beta:
- update the smj proper noun lexicon, and refine the morphological analysis,
cf. the propernoun-smj-lex.txt
- Go through the Num bugs
- Improve automatic alignment process
- Align corpus manually
- Add potential speller test texts
- collect a list of PR recipients
- fix bugs!