{
    "paper_id": "2020",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T14:57:57.655925Z"
    },
    "title": "A Lexical Simplification Tool for Promoting Health Literacy",
    "authors": [
        {
            "first": "Leonardo",
            "middle": [],
            "last": "Zilio",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Surrey",
                "location": {
                    "country": "United Kingdom"
                }
            },
            "email": "l.zilio@surrey.ac.uk"
        },
        {
            "first": "Liana",
            "middle": [
                "Braga"
            ],
            "last": "Paraguassu",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Federal University of Rio Grande do Sul",
                "location": {
                    "country": "Brazil"
                }
            },
            "email": ""
        },
        {
            "first": "Luis",
            "middle": [
                "Antonio"
            ],
            "last": "Leiva Hercules",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Gabriel",
            "middle": [
                "L"
            ],
            "last": "Ponomarenko",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Federal University of Rio Grande do Sul",
                "location": {
                    "country": "Brazil"
                }
            },
            "email": ""
        },
        {
            "first": "Laura",
            "middle": [
                "P"
            ],
            "last": "Berwanger",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Federal University of Rio Grande do Sul",
                "location": {
                    "country": "Brazil"
                }
            },
            "email": ""
        },
        {
            "first": "Maria",
            "middle": [
                "Jos\u00e9 Bocorny"
            ],
            "last": "Finatto",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Federal University of Rio Grande do Sul",
                "location": {
                    "country": "Brazil"
                }
            },
            "email": "mariafinatto@gmail.com"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper presents MedSimples, an authoring tool that combines Natural Language Processing, Corpus Linguistics and Terminology to help writers to convert health-related information into a more accessible version for people with low literacy skills. MedSimples applies parsing methods associated with lexical resources to automatically evaluate a text and present simplification suggestions that are more suitable for the target audience. Using the suggestions provided by the tool, the author can adapt the original text and make it more accessible. The focus of MedSimples lies on texts for special purposes, so that it not only deals with general vocabulary, but also with specialized terms. The tool is currently under development, but an online working prototype exists and can be tested freely. An assessment of MedSimples was carried out aiming at evaluating its current performance with some promising results, especially for informing the future developments that are planned for the tool.",
    "pdf_parse": {
        "paper_id": "2020",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper presents MedSimples, an authoring tool that combines Natural Language Processing, Corpus Linguistics and Terminology to help writers to convert health-related information into a more accessible version for people with low literacy skills. MedSimples applies parsing methods associated with lexical resources to automatically evaluate a text and present simplification suggestions that are more suitable for the target audience. Using the suggestions provided by the tool, the author can adapt the original text and make it more accessible. The focus of MedSimples lies on texts for special purposes, so that it not only deals with general vocabulary, but also with specialized terms. The tool is currently under development, but an online working prototype exists and can be tested freely. An assessment of MedSimples was carried out aiming at evaluating its current performance with some promising results, especially for informing the future developments that are planned for the tool.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Most health professionals in Brazil have no specific or even complementary training in the area of communication. However, when it comes to health-related information, as Cambricoli (2019) points out, based on a study made by Google, 26% of Brazilians have the Internet as their first source to look for information about their own or their relatives' illnesses, which puts Brazil in the number one position in health-related searches on Google and on YouTube. In a scenario like that, it is important to have support for improving health communication and patient understanding, and this is directly related to health literacy. Health literacy is about communication and understanding; it affects how people understand wellness and illness, and participate in health promotion and prevention activities (Osborne, 2005) . Adding to the question of health literacy, Brazil presents a panorama where functional illiteracy 1 rates are critical. According to a recent INAF 2 report (Lima and Catelli Jr, 2018) published by the Paulo Montenegro Institute, 29% of Brazilians (38 million people) with ages ranging from 15 to 64 years old are considered functional illiterates. Also according to this INAF report, only 12% of the Brazilian population at working age can be considered proficient. Even though literacy skills are low in the country, Brazil has seen a significant increase in Internet access in recent years, and information has become available to a much larger number of people. According to the Brazilian Institute of Geography and Statistics (IBGE) 3 , in 2017, 67% of 1 People are considered functionally illiterate when they cannot use reading, writing, and calculation skills for their own and the community's development.",
                "cite_spans": [
                    {
                        "start": 171,
                        "end": 188,
                        "text": "Cambricoli (2019)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 805,
                        "end": 820,
                        "text": "(Osborne, 2005)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 979,
                        "end": 1006,
                        "text": "(Lima and Catelli Jr, 2018)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "2 INAF is a Brazilian literacy indicator. More information about INAF can be found at: http://www.ipm.org.br/ inaf 3 https://bit.ly/2HBwmND",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "the Brazilian population have access to the Internet, as opposed to less than half of the population in 2013.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "As it is now, the Brazilian scenario shows a considerable number of people looking for health-related information on the Internet, while only a small percentage of the population can be considered proficient. Adding to that, health professionals don't usually receive the necessary training for providing information that matches the literacy level of a large number of people. In this scenario, a tool that aims at making information more accessible to different audience profiles and that respects the choices of a specialized writer can provide a relevant service both for professionals in charge of communication and for the society in general. MedSimples 4 was conceived for supporting the involvement of health professionals and health communication professionals and for helping them to write information that can be understood by a large part of the population. It is a tool that was designed to help professionals in the task of improving the communication of health-related information to lay people that have low literacy skills. In that way, MedSimples works as a text simplification tool that highlights lexical items and offers suggestions that could improve the accessibility of a health-related text for the Brazilian population. The project is currently focused on the Parkinson's disease domain, and in this paper our aim is to conduct an initial evaluation of the tool, so that we can draw some considerations for its future improvements, especially bearing in mind that the current working structure of MedSimples will be later adjusted for other topics from the Health Sciences. 
This paper is divided as follows: Section 2 presents information about text simplification in general and about the PorSimples project, which deals with text simplification for Portuguese; in Section 3, we present how MedSimples was built, how it works and what its main features and resources are; Section 4 discusses the methodology we applied for evaluating MedSimples and presents its results; in Section 5 we further discuss the evaluation by presenting some data from an error analysis; finally, Section 6 reports on the main findings of this paper and discusses future improvements and changes to the online tool.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "There are several studies regarding text simplification in general and regarding areas that are directly related to text simplification, such as readability assessment (e.g. Vajjala and Meurers (2014)), complex word identification (e.g. Wilkens et al. (2014)), and intralingual translation (e.g. Rossetti (2019)). However, in this section, we will first focus on briefly introducing the task of text simplification in general, presenting different levels of simplification, and then proceed to describe some more applied related work that was developed in the form of a tool for simplifying texts written in Portuguese.",
                "cite_spans": [
                    {
                        "start": 174,
                        "end": 200,
                        "text": "Vajjala and Meurers (2014)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 237,
                        "end": 258,
                        "text": "Wilkens et al. (2014)",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2."
            },
            {
                "text": "In Natural Language Processing, the text simplification task focuses on rewriting a text, adding complementary information (e.g. definitions), and/or discarding irrelevant information for minimizing the text's complexity, while trying to ensure that the meaning of the simplified text is not greatly altered, and that the new, rewritten version seems natural and fluid for the reader (Siddharthan, 2002; Siddharthan, 2014; Paetzold and Specia, 2015) . This simplification usually occurs by replacing complex words or phrases with simpler ones, in what is called lexical simplification, and/or by modifying the text's syntactical structure to render it simpler, which is called syntactical simplification. Different types of simplification architectures have been proposed (e.g. Siddharthan (2002; Gasperin et al. (2009; Coster and Kauchak (2011; Paetzold and Specia (2015) ), dealing with either or both levels of simplification, generally going from the syntactical level to the lexical level. In this paper, we are focusing on the lexical level, following the bases described by Saggion (2017) . MedSimples addresses words, phrases and terms that may be complex for people with low literacy and presents simpler suggestions or term explanations. However, it is important to point out that MedSimples does not focus on trying to automatically replace complex phrases. It is designed to help communicators of health-related information to write more simplified texts. As such, it only presents suggestions of changes, in the form of simpler words or term explanations, that may or may not be accepted by the author of the text.",
                "cite_spans": [
                    {
                        "start": 395,
                        "end": 414,
                        "text": "(Siddharthan, 2002;",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 415,
                        "end": 433,
                        "text": "Siddharthan, 2014;",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 434,
                        "end": 460,
                        "text": "Paetzold and Specia, 2015)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 793,
                        "end": 811,
                        "text": "Siddharthan (2002;",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 812,
                        "end": 834,
                        "text": "Gasperin et al. (2009;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 835,
                        "end": 860,
                        "text": "Coster and Kauchak (2011;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 861,
                        "end": 887,
                        "text": "Paetzold and Specia (2015)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 1096,
                        "end": 1110,
                        "text": "Saggion (2017)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Text Simplification",
                "sec_num": "2.1."
            },
            {
                "text": "For Portuguese, there are studies focusing on the classification of complex texts, such as Wagner Filho et al. (2016) and Gazzola et al. (2019), and others that aim at evaluating sentence complexity, such as Leal et al. (2019). However, for the purposes of text simplification, i.e., identifying complex structures of a text and suggesting simpler replacement structures, in the way that we are looking for in MedSimples, the project PorSimples (Alu\u00edsio et al., 2008; Alu\u00edsio and Gasperin, 2010) is the existing project with the most similarities. The project PorSimples deals with the challenges of text simplification and has an online tool called Simplifica (Scarton et al., 2010) that helps authors to write simpler texts. Simplifica uses lexical resources allied with automatically extracted features to identify complex parts of a text and make suggestions on how to make it more readable for people with low literacy. It presents a module for lexical simplification and another module for syntactical simplification, allowing for some customization in terms of which resources are used and which types of syntactical structures are targets of the simplification. While Simplifica serves as an interesting model as a simplification authoring tool, it focuses on the general language, and, as such, it usually cannot suggest befitting simplifications for specialized terms, and this is where the main strength of MedSimples lies. By drawing on specialized resources, MedSimples aims at focusing on different areas of human knowledge to provide more suitable simplification suggestions, and, by targeting health-related texts, it addresses a widely recognized issue for text simplification (Rossetti, 2019).",
                "cite_spans": [
                    {
                        "start": 121,
                        "end": 142,
                        "text": "Gazzola et al. (2019)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 208,
                        "end": 226,
                        "text": "Leal et al. (2019)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 443,
                        "end": 465,
                        "text": "(Alu\u00edsio et al., 2008;",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 466,
                        "end": 493,
                        "text": "Alu\u00edsio and Gasperin, 2010)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 668,
                        "end": 690,
                        "text": "(Scarton et al., 2010)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 1712,
                        "end": 1728,
                        "text": "(Rossetti, 2019)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Simplification for Portuguese",
                "sec_num": "2.2."
            },
            {
                "text": "MedSimples relies on different corpora and lexical resources, and uses a parsing system at its core. By combining these resources, it can identify complex words and present suggestions for lexical simplification. In this section, we first discuss the lexical resources that were created for MedSimples and then present the pipeline.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "System Description",
                "sec_num": "3."
            },
            {
                "text": "One of the challenges of text simplification is to identify what kind of vocabulary could be complex for the target audience and try to suggest simpler replacement words or definitions. At this stage of the project, MedSimples deals with the specialized, health-related area of Parkinson's disease 5 , so it has to identify not only phrases that are complex from the point of view of the general language, but also terms. It also has to treat complex phrases and terms differently, because offering a simpler lexical suggestion for a term may not preserve approximately the same semantic content for the reader, which could lead to serious consequences in a text with information about a health-related subject. To decide what should be considered a complex phrase, we looked at the problem from a different perspective. By relying on CorPop (Pasqualini, 2018; Pasqualini and Finatto, 2018) , a corpus composed of texts that were written for and/or by people with low literacy skills, we were able to estimate which words could be considered simple for our target audience. The corpus was tagged using the PassPort parser (Zilio et al., 2018) , and a frequency-ranked word list was generated considering both lemma and part of speech. From this word list, we selected all words with a frequency of five or more to be part of our list of simple words. CorPop is a small corpus, containing around 740k tokens and 24k lemmas associated with different word classes, but it was positively evaluated in terms of adequacy for people with low literacy, so we considered that even a low frequency such as five would be enough to warrant the status of simple word to a lemma that is present in this corpus; this led to a list of almost 7k lemmas (associated with the respective word class). We used this list from CorPop to then filter the Thesaurus of Portuguese (TeP) 2.0 (Maziero and Pardo, 2008) and generate a list of complex words with simpler synonyms. 
TeP is a language resource that contains WordNet-like synsets for Portuguese. We automatically analyzed each synset and set complex words (i.e. those which were not in the CorPop list of simple words) as entries, while the other words in the synset that were present in our list of simple words were set as simpler synonyms. This list of complex words with simpler synonyms contains more than 15k entries, and also includes some multiword structures, such as a favor [in favor], ab\u00f3bada celeste [celestial dome], curriculum vitae, de s\u00fabito [suddenly]. In addition to the list of complex words with simpler synonyms generated from TeP and the list of simple words extracted from CorPop, MedSimples also relies on a list of terms related to Parkinson's disease. This list is still in the process of being completed and simplified in order to achieve definitions that are suitable for our target audience. It is being manually built by linguists and also manually validated by a specialist in Medicine 6 . These three lexical resources are used for the automatic process of complex word identification and suggestion of simplifications, as we explain in the next subsection. Table  1 shows the precise numbers of items in each of them.",
                "cite_spans": [
                    {
                        "start": 867,
                        "end": 885,
                        "text": "(Pasqualini, 2018;",
                        "ref_id": "BIBREF27"
                    },
                    {
                        "start": 886,
                        "end": 915,
                        "text": "Pasqualini and Finatto, 2018)",
                        "ref_id": "BIBREF26"
                    },
                    {
                        "start": 1147,
                        "end": 1167,
                        "text": "(Zilio et al., 2018)",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 1883,
                        "end": 1908,
                        "text": "(Maziero and Pardo, 2008)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 2511,
                        "end": 2521,
                        "text": "[suddenly]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 3139,
                        "end": 3147,
                        "text": "Table  1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Simple Corpus and Lexical Resources",
                "sec_num": "3.1."
            },
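The list-building procedure described above (frequency filtering of the parsed CorPop output, then mining TeP-like synsets for complex entries with simpler co-members) can be sketched as follows. This is a minimal illustration, not MedSimples' actual code; all names are hypothetical, and the synsets here are plain word lists.

```python
from collections import Counter

def build_simple_word_list(tagged_corpus, min_freq=5):
    """Given (lemma, pos) pairs from a parsed corpus such as CorPop,
    keep every lemma/POS combination seen at least min_freq times."""
    counts = Counter(tagged_corpus)
    return {item for item, freq in counts.items() if freq >= min_freq}

def build_complex_word_list(synsets, simple_words):
    """For each synset, words outside the simple list become entries,
    and their simple co-members become the suggested synonyms."""
    suggestions = {}
    for synset in synsets:
        simple = [w for w in synset if w in simple_words]
        if not simple:
            continue  # no simpler synonym available in this synset
        for word in synset:
            if word not in simple_words:
                suggestions.setdefault(word, set()).update(simple)
    return suggestions
```

A synset whose members are all absent from the simple list contributes nothing, which is why only complex words with at least one simpler synonym end up as entries.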
            {
                "text": "The MedSimples online tool uses automatic text processing and relies on the PassPort parser (Zilio et al., 2018) , first tagging the text that the user provides as input. It then analyses each sentence by matching the items first against the list of terms, then against the list of simple words and, finally, against the list of complex words. For matching the list of terms, MedSimples uses the surface forms of words, based on the terminological principle that terms can differentiate themselves by their surface realization (Krieger and Finatto, 2004) . Then, it uses the lemma forms either to ignore a word (if it is present in the list of simple words) or to identify it as complex and present a simpler suggestion (if it is present in the complex word list). MedSimples is still under development, but all the steps mentioned above have already been implemented, and the system can visually highlight terms and complex words with suggestions in different colors (depending on whether the item is a term or a complex word). As it stands, the system only visually flags words as complex if there are simpler suggestions in our lexical resources; otherwise, they are ignored. This can be modified, and the idea for the future is to also annotate as complex some types of words that are not in the list of complex words, so as to at least indicate their complexity to the user. Here, for the purpose of this evaluation, we wanted the system to identify only complex words for which we have suggestions, so that we could more easily verify how well our suggestions fit the context. However, this decision also means that we are not currently presenting all the information that we can, and this is reflected in the evaluation process, as will be seen in the next section. This approach was not used for terms, which we mark as recognized even if we do not yet have a definition for them. 
We took a different approach for each type of automatic annotation because the list of terms is much smaller than the number of out-of-vocabulary words, and we expect to have definitions in place for them in the foreseeable future. Figure 1 shows how the system currently presents the information about terms and complex phrases. As explained above, this presentation was chosen to speed up the current evaluation, but, in the future, the suggestions will be shown in a different way, so as not to clutter the text for the user.",
                "cite_spans": [
                    {
                        "start": 92,
                        "end": 111,
                        "text": "(Zilio et al., 2018",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 511,
                        "end": 538,
                        "text": "(Krieger and Finatto, 2004)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 2117,
                        "end": 2125,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Identification and Suggestions",
                "sec_num": "3.2."
            },
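The matching cascade described above (terms first, by surface form; then simple words and complex words, by lemma) can be sketched roughly as below. The function and label names are illustrative assumptions, not MedSimples' real implementation; the parser is assumed to supply (surface, lemma) pairs.

```python
def annotate(tokens, term_surfaces, simple_lemmas, complex_suggestions):
    """tokens: list of (surface, lemma) pairs from the parser.
    Terms are matched on the surface form; simple and complex
    words are matched on the lemma. Complex matches carry their
    simpler suggestions; everything else is ignored."""
    annotations = []
    for surface, lemma in tokens:
        if surface.lower() in term_surfaces:
            annotations.append(("term", surface, None))
        elif lemma in simple_lemmas:
            annotations.append(("simple", surface, None))
        elif lemma in complex_suggestions:
            annotations.append(("complex", surface, complex_suggestions[lemma]))
        else:
            annotations.append(("ignored", surface, None))
    return annotations
```

The ordering matters: checking terms before the simple-word list keeps a domain term from being silently skipped just because its lemma happens to be frequent in CorPop.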
            {
                "text": "In this paper, one of our aims is to measure how MedSimples is performing in its current state, and what areas should be the focus of our next efforts. To that end, we designed a strict evaluation using a gold standard that was created using authentic online material. In the next subsections, we discuss the creation of the gold standard, then explain the evaluation methodology and, finally, present the results.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4."
            },
            {
                "text": "The first step in creating a gold standard for the evaluation of MedSimples was to build a corpus with texts related to the Parkinson's disease domain. To achieve this, we crawled the web using trigram combinations of 7 terms related to the target domain: \"doen\u00e7a de Parkinson\" [Parkinson's disease], \"Parkinson\", \"mal de Parkinson\" [alternative denomination for Parkinson's disease 7 ], \"cuidador\" [caregiver] (Rieder et al., 2016) . We used slate3k 8 to scrape PDF documents and jusText 9 to exclude boilerplate and uninteresting content. We also made sure to scrape content only from different websites, by not repeating previously scraped URLs.",
                "cite_spans": [
                    {
                        "start": 401,
                        "end": 422,
                        "text": "(Rieder et al., 2016)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 479,
                        "end": 480,
                        "text": "9",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Gold Standard",
                "sec_num": "4.1."
            },
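The crawl setup described above can be sketched as follows: build every unordered 3-term query from the seed terms, and keep only URLs from websites not seen before. This is a hedged illustration, not the authors' crawler; only four of the seven seed terms are quoted in the paper, so the list below is a subset, and deduplicating by network location is one possible reading of "different websites".

```python
from itertools import combinations
from urllib.parse import urlparse

# Subset of the paper's 7 seed terms (the full list is not quoted in the text).
SEED_TERMS = ["doença de Parkinson", "Parkinson", "mal de Parkinson", "cuidador"]

def build_queries(terms):
    """All unordered 3-term combinations, joined as search-engine queries."""
    return [" ".join(triple) for triple in combinations(terms, 3)]

def keep_new_sites(urls, seen_netlocs):
    """Keep each website at most once by tracking its network location."""
    fresh = []
    for url in urls:
        netloc = urlparse(url).netloc
        if netloc not in seen_netlocs:
            seen_netlocs.add(netloc)
            fresh.append(url)
    return fresh
```

With 7 seed terms this yields C(7, 3) = 35 distinct trigram queries.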
            {
                "text": "From the resulting crawled corpus, we created 8 random samples of 120 medium-to-long sentences 10 each and distributed them to 8 annotators 11 . Each sample had 30 sentences that were annotated by all annotators and 90 sentences that were annotated only by the individual annotator, totaling 750 sentences. Annotators were asked to annotate any word, phrase or term that they deemed to be complex or terminological, making an explicit distinction between terms and complex phrases. The result of the annotation was then analysed in terms of a 8 https://pypi.org/project/slate3k/ 9 http://corpus.tools/wiki/Justext 10 Each sentence in the gold standard has a minimum of 15 space-separated tokens.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Gold Standard",
                "sec_num": "4.1."
            },
            {
                "text": "11 All annotators are linguists or undergraduate students of Linguistics. Some of the authors also contributed as annotators. pairwise Cohen's kappa inter-annotator agreement (Cohen, 1960) , by using the agreement verified on the 30 sentences that were annotated by all. Since this was a free-flow annotation, in which any part of a sentence could be selected for annotation, with an additional classification task (complex phrase or term) on top of it, the task was very difficult, so we did not expect to achieve high kappa values, but we set 0.20 as a bare minimum. After calculating the agreement (Table 2) , two annotated samples were excluded from the gold standard for not achieving a minimum mean kappa score of 0.20. The final Fleiss' kappa score (Fleiss, 1971) for the remaining annotators' samples was 0.25. This filtering process generated a final gold standard with 570 annotated sentences and 2080 annotated instances. These final instances were thoroughly checked for inconsistencies (errors resulting from the manual annotation) by one of the authors.",
                "cite_spans": [
                    {
                        "start": 175,
                        "end": 188,
                        "text": "(Cohen, 1960)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 774,
                        "end": 788,
                        "text": "(Fleiss, 1971)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 619,
                        "end": 628,
                        "text": "(Table 2)",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Gold Standard",
                "sec_num": "4.1."
            },
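The pairwise agreement check on the 30 shared sentences can be sketched with a small Cohen's kappa implementation. This is a generic sketch under the assumption that each annotator's decisions are reduced to one categorical label per token (e.g. "term", "complex", "none"); the paper's free-flow spans would first need to be aligned to such a common unit.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if p_e == 1.0:
        return 1.0  # degenerate case: both annotators use a single label
    return (p_o - p_e) / (1 - p_e)
```

A sample would then be dropped when its annotator's mean pairwise kappa against the others falls below the 0.20 threshold.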
            {
                "text": "Having a gold standard for the evaluation, we randomized its sentences and divided all the instances among the authors for evaluation. Since the evaluation was a more straightforward process, we did not duplicate sentences for calculating agreement on the evaluation process (as we did for the generation of the gold standard). Some of the gold standard annotators worked as evaluators as well. For the evaluation, we asked evaluators to check three aspects of the automatic annotation: first, whether the word or phrase was recognized as complex or as a term; second, whether it was correctly recognized as either a term or a complex phrase; and, third, whether the suggestion semantically fitted the context 12 . For the evaluation of the semantic and the recognition tasks, there was an option for a partial match 13 . In order to simplify the process for the human evaluators, we did not further divide the classification of the partially recognized instances into mismatches for terms or complex phrases. In addition to the recognition and the semantic evaluation, in cases where MedSimples failed to recognize the target phrase (either no recognition or only partial recognition), evaluators were asked to proceed with an error analysis, checking for typos (such as numbers attached at the beginning or end of an instance, spelling errors, etc.), foreign words 14 , or unrelated terms 15 . The phrases in the gold standard were also compared with the words in the list of simple words to see if there were any matches.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology",
                "sec_num": "4.2."
            },
            {
                "text": "As we explained in the previous sections, we used a hard test to see how MedSimples is currently performing, especially because the aim of this study was to identify the points where we need to improve in the future. As shown in Table  3 , one of the negative results from this evaluation is that MedSimples currently does not achieve good coverage: 67.88% of all instances were not taken into account for simplification in any way. However, there is also positive information in these results: for all the instances that were correctly recognized, MedSimples provided the correct meaning in 67.04% of the cases (with a slightly better performance for terms, as expected, since their suggestions come from a handcrafted glossary). When there was a partial recognition of an instance (which could only happen for multiword instances) or a mismatch, we see that MedSimples struggles to provide a suggestion that fits the context. This is especially true in the case of mismatches, where the number of suggestions that do not fit the context (bad suggestions) is 3.5 times higher than the number of good suggestions. By further analyzing the partially recognized instances, we see that the vast majority of unfitting suggestions come from our list of complex words (the one that was automatically created using TeP (Maziero and Pardo, 2008) and CorPop (Pasqualini, 2018) ).",
                "cite_spans": [
                    {
                        "start": 1348,
                        "end": 1373,
                        "text": "(Maziero and Pardo, 2008)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 1385,
                        "end": 1403,
                        "text": "(Pasqualini, 2018)",
                        "ref_id": "BIBREF27"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 228,
                        "end": 236,
                        "text": "Table  3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.3."
            },
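The headline figures can be recomputed from the Table 3 counts, reading its columns as three outcome groups (recognized, partially recognized, mismatch), each split into Good/Bad/Partial suggestion judgments, plus a not-recognized column; that column grouping is our reading of the flattened table, and the arithmetic below confirms it matches the percentages quoted in the text.

```python
# Counts from Table 3, terms and complex phrases combined ("Total" row).
recognized = {"good": 297, "bad": 120, "partial": 26}
partially_recognized = {"good": 52, "bad": 98, "partial": 26}
mismatch = {"good": 10, "bad": 35, "partial": 4}
not_recognized = 1412
total = 2080

# Share of instances not handled in any way.
coverage_gap = not_recognized / total
# Share of correct meanings among correctly recognized instances.
good_when_recognized = recognized["good"] / sum(recognized.values())

print(f"{coverage_gap:.2%}")          # prints 67.88%
print(f"{good_when_recognized:.2%}")  # prints 67.04%
```

The mismatch row also reproduces the 3.5 ratio quoted above: 35 bad versus 10 good suggestions.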
            {
                "text": "After looking at the results, especially those from the unrecognized and partially recognized instances, we can turn to an error analysis to better understand what was missing. Table 4 shows information about out-of-scope terms (i.e. terms that do not belong to the area of Parkinson's disease), foreign words present in the target instances, and typos. Out-of-scope terms accounted for 13.05% of the terms that were not recognized by the tool (counting also the ones that were partially recognized or mismatched). The numbers of foreign words and typos, on the other hand, are almost negligible, accounting for only 4.67% of the unrecognized instances. As a second part of this error analysis, we looked at our own list of words that are assumed to be simple (the list extracted from CorPop, which was already tested by Pasqualini (2018) in terms of complexity) and matched it against instances that were considered complex phrases by the annotators. In total, we found that 393 instances that were not recognized in any form contained words that were in our list of simple words; this accounts for 55.11% of the unrecognized complex phrases in the evaluation. This comparison revealed a complicated, but expected (as pointed out by Cabr\u00e9 (1993) , Krieger and Finatto (2004) ), aspect of lexical simplification: there are words or phrases with a generally simple meaning that can take on a complex meaning in specific contexts (for instance, \"administra\u00e7\u00e3o\" [administration] in general has a fairly simple meaning, but in the context of \"administration of medicines to patients\", it takes on a more complex meaning). Table 3 : Evaluation results. Terms: Recognized (Good 125, Bad 47, Partial 0); Partially recognized (Good 47, Bad 87, Partial 22); Mismatch (Good 10, Bad 35, Partial 4); Not recognized 699; Total 1076. Complex phrases: Recognized (Good 172, Bad 73, Partial 26); Partially recognized (Good 5, Bad 11, Partial 4); Mismatch (Good 0, Bad 0, Partial 0); Not recognized 713; Total 1004. Total: Recognized (Good 297, Bad 120, Partial 26); Partially recognized (Good 52, Bad 98, Partial 26); Mismatch (Good 10, Bad 35, Partial 4); Not recognized 1412; Total 2080. 
The labels \"Good\", \"Bad\" and \"Partial\" reflect the evaluation of the meaning of MedSimples' suggestions in the given context. Looking further into this comparison, it also showed that some words with a generally simple meaning, such as \"interferir\" [to interfere] and \"promover\" [to promote], were annotated as complex even when the context in which they appear does not imply a more complex meaning. This observation requires further analyses that we have not yet carried out, to better estimate what could be considered for inclusion in our current lexical resources and what can be viewed as an overestimation of complexity in the annotation. The case of words that assume a more complex meaning in context poses an interesting challenge for MedSimples. Since we are currently not using any type of disambiguation, we have no way of distinguishing between the \"administration of a business\" and the \"administration of medicines\", and this should be taken into account in the future steps of the tool.",
                "cite_spans": [
                    {
                        "start": 857,
                        "end": 874,
                        "text": "Pasqualini (2018)",
                        "ref_id": "BIBREF27"
                    },
                    {
                        "start": 1277,
                        "end": 1289,
                        "text": "Cabr\u00e9 (1993)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 1292,
                        "end": 1318,
                        "text": "Krieger and Finatto (2004)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 176,
                        "end": 183,
                        "text": "Table 4",
                        "ref_id": "TABREF5"
                    },
                    {
                        "start": 1717,
                        "end": 1890,
                        "text": "Terms  125  47  0  47  87  22  10  35  4  699 1076  Complex phrases  172  73  26  5  11  4  0  0  0  713 1004  Total  297 120  26  52  98  26  10  35  4  1412 2080   Table 3",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "5."
            },
            {
                "text": "In this paper we presented MedSimples, an authoring tool mainly focused on helping producers of healthcare content to provide more accessible texts to Brazilian people with low literacy. MedSimples is currently under development, but has a working online prototype for testing. By accessing the website, users can input a text and, after selecting the domain and type of target reader and submitting it for processing, receive suggestions of simpler words or definitions for terms that could be taken into consideration when formulating a more accessible text. In order to expand MedSimples, an evaluation was developed to assess the current state of the system and to provide useful information for the steps going forward. One of the results of the evaluation was that MedSimples is still lacking in terms of good suggestions that fit the context of a text dealing with Parkinson's disease. That is one of the reasons why the list of complex words and simple suggestions is going to be the target of a major review, which will check for entries that are not very helpful and try to provide suggestions that would potentially present a better fit for the specialized context, considering meanings that are more in line with the domain. This evaluation also provided some interesting information for expanding MedSimples' term base, which currently contains almost 450",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Final Thoughts and Future Work",
                "sec_num": "6."
            },
            {
                "text": "terms, but which could be expanded to achieve broader coverage of the area, possibly including terms that are not directly linked to Parkinson's disease, but that deal with the more general terminology of the healthcare area. Going forward, we have several improvements planned for the tool. Along with the changes planned for the lists of terms and complex words explained above, we are also studying, for instance, the possibility of expanding the identification of complex words to some of those for which we currently do not have a simpler suggestion, as this might help users identify possible challenges for their target audience. The changes are not only planned for the backend, but also for the interface. By presenting a more visually appealing interface (for instance, without the presentation of suggestions within the text), the tool can be made more suitable for helping health professionals and communicators of the health industry in their task of writing texts for people with low literacy.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Final Thoughts and Future Work",
                "sec_num": "6."
            },
            {
                "text": "Freely available at: http://www.ufrgs.br/ textecc/acessibilidade/page/cartilha/.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The inclusion of other health-related areas is already in development.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "\"Mal de Parkinson\" is an alternative denomination whose use is currently not recommended by the World Health Organization, because it can cause discrimination or prejudice. Still, it can easily appear in online texts about the subject of Parkinson's disease, so we decided to include it as well.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "In those cases where the suggestion was a whole synset, only one of the suggested replacement words needed to fit for it to be considered a good suggestion. This decision takes into consideration that we rely on the user to decide which of the suggested replacement words fits the context.13 For instance, if only part of a term was identified or if a suggested simplification would only partially fit the context.14 Since we are using lexical resources for the Brazilian Portuguese variant, the evaluators were instructed to mark European Portuguese variants as foreign words as well.15 Since the corpus was crawled from the internet, there is always the possibility of having sentences that do not belong to the Parkinson's disease domain, even if the keywords used were heavily linked to the domain.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The authors would like to thank these funding sources that contributed to this research: Expanding Excellence in England (E3), Google LARA Award 2019, PIBIC-PROPESQ-UFRGS, CNPq, and CAPES.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": "7."
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Fostering digital inclusion and accessibility: the porsimples project for simplification of portuguese texts",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "M"
                        ],
                        "last": "Alu\u00edsio",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Gasperin",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the NAACL HLT 2010 Young Investigators Workshop on Computational Approaches to Languages of the Americas",
                "volume": "",
                "issue": "",
                "pages": "46--53",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alu\u00edsio, S. M. and Gasperin, C. (2010). Fostering dig- ital inclusion and accessibility: the porsimples project for simplification of portuguese texts. In Proceedings of the NAACL HLT 2010 Young Investigators Workshop on Computational Approaches to Languages of the Amer- icas, pages 46-53. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Towards brazilian portuguese automatic text simplification systems",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "M"
                        ],
                        "last": "Alu\u00edsio",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Specia",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [
                            "A"
                        ],
                        "last": "Pardo",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [
                            "G"
                        ],
                        "last": "Maziero",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "P"
                        ],
                        "last": "Fortes",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the eighth ACM symposium on Document engineering",
                "volume": "",
                "issue": "",
                "pages": "240--248",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alu\u00edsio, S. M., Specia, L., Pardo, T. A., Maziero, E. G., and Fortes, R. P. (2008). Towards brazilian portuguese automatic text simplification systems. In Proceedings of the eighth ACM symposium on Document engineering, pages 240-248.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "La terminolog\u00eda: teor\u00eda, metodolog\u00eda, aplicaciones",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "T"
                        ],
                        "last": "Cabr\u00e9",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Cabr\u00e9, M. T. (1993). La terminolog\u00eda: teor\u00eda, metodolog\u00eda, aplicaciones. Ant\u00e1rtida/Emp\u00faries.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Brasil lidera aumento das pesquisas por temas de sa\u00fade no Google",
                "authors": [
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Cambricoli",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Cambricoli, F. (2019). Brasil lidera aumento das pesquisas por temas de sa\u00fade no Google.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "A coefficient of agreement for nominal scales",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Cohen",
                        "suffix": ""
                    }
                ],
                "year": 1960,
                "venue": "Educational and psychological measurement",
                "volume": "20",
                "issue": "1",
                "pages": "37--46",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Cohen, J. (1960). A coefficient of agreement for nomi- nal scales. Educational and psychological measurement, 20(1):37-46.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Learning to Simplify Sentences Using Wikipedia",
                "authors": [
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Coster",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Kauchak",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of Text-To-Text Generation, ACL Workshop",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Coster, W. and Kauchak, D. (2011). Learning to Simplify Sentences Using Wikipedia. In Proceedings of Text-To- Text Generation, ACL Workshop.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Measuring nominal scale agreement among many raters",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "L"
                        ],
                        "last": "Fleiss",
                        "suffix": ""
                    }
                ],
                "year": 1971,
                "venue": "Psychological bulletin",
                "volume": "76",
                "issue": "5",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Natural language processing for social inclusion: a text simplification architecture for different literacy levels",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Gasperin",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Maziero",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Specia",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [
                            "A"
                        ],
                        "last": "Pardo",
                        "suffix": ""
                    },
                    {
                        "first": "Alu\u00edsio",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "M"
                        ],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proc. of SEMISH-XXXVI Semin\u00e1rio Integrado de Software e Hardware",
                "volume": "",
                "issue": "",
                "pages": "387--401",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gasperin, C., Maziero, E., Specia, L., Pardo, T. A., and Alu\u00edsio, S. M. (2009). Natural language processing for social inclusion: a text simplification architecture for different literacy levels. Proc. of SEMISH-XXXVI Semin\u00e1rio Integrado de Software e Hardware, pages 387-401.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Predi\u00e7\u00e3o da complexidade textual de recursos educacionais abertos em portugu\u00eas",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Gazzola",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "E"
                        ],
                        "last": "Leal",
                        "suffix": ""
                    },
                    {
                        "first": "Aluisio",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "M"
                        ],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gazzola, M., Leal, S. E., and Aluisio, S. M. (2019). Predi\u00e7\u00e3o da complexidade textual de recursos educacionais abertos em portugu\u00eas.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Introdu\u00e7\u00e3o \u00e0 terminologia: teoria e pr\u00e1tica",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "D G"
                        ],
                        "last": "Krieger",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "J B"
                        ],
                        "last": "Finatto",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Krieger, M. d. G. and Finatto, M. J. B. (2004). Introdu\u00e7\u00e3o \u00e0 terminologia: teoria e pr\u00e1tica. Editora Contexto.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Avalia\u00e7\u00e3o autom\u00e1tica da complexidade de senten\u00e7as do portugu\u00eas brasileiro para o dom\u00ednio rural",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "E"
                        ],
                        "last": "Leal",
                        "suffix": ""
                    },
                    {
                        "first": "Magalhaes",
                        "middle": [],
                        "last": "De",
                        "suffix": ""
                    },
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Duran",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "S"
                        ],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "Alu\u00edsio",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "M"
                        ],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Embrapa Gado de Leite-Artigo em anais de congresso",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Leal, S. E., de Magalh\u00e3es, V., Duran, M. S., and Alu\u00edsio, S. M. (2019). Avalia\u00e7\u00e3o autom\u00e1tica da complexidade de senten\u00e7as do portugu\u00eas brasileiro para o dom\u00ednio rural. In Embrapa Gado de Leite-Artigo em anais de congresso (ALICE).",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Inaf brasil 2018: Resultados preliminares. a\u00e7\u00e3o educativa/instituto paulo montenegro",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Lima",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Catelli",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lima, A. and Catelli Jr, R. (2018). Inaf brasil 2018: Resultados preliminares. a\u00e7\u00e3o educativa/instituto paulo montenegro, 2018.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Health Literacy from A to Z. Practical Ways to Communicate Your Health Message",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Osborne",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Osborne, H. (2005). Health Literacy from A to Z. Practical Ways to Communicate Your Health Message. Jones and Bartlett Publishers.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Lexenstein: A framework for lexical simplification",
                "authors": [
                    {
                        "first": "G",
                        "middle": [
                            "H"
                        ],
                        "last": "Paetzold",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Specia",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "ACL-IJCNLP",
                "volume": "2015",
                "issue": "1",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Paetzold, G. H. and Specia, L. (2015). Lexenstein: A framework for lexical simplification. ACL-IJCNLP 2015, 1(1):85.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Entendendo a doen\u00e7a de parkinson: Informa\u00e7\u00f5es para pacientes, familiares e cuidadores. Aspectos Cognitivos na Doen\u00e7a de Parkinson",
                "authors": [
                    {
                        "first": "C",
                        "middle": [
                            "R M"
                        ],
                        "last": "Rieder",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Chardosim",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Terra",
                        "suffix": ""
                    },
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Gonzatti",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "97--104",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Rieder, C. R. M., Chardosim, N., Terra, N., and Gonzatti, V. (2016). Entendendo a doen\u00e7a de parkinson: Informa\u00e7\u00f5es para pacientes, familiares e cuidadores. Aspectos Cognitivos na Doen\u00e7a de Parkinson. Porto Alegre, RS: EDIPUCRS, 2016:97-104.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Intralingual translation and cascading crises. Translation in Cascading Crises",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Rossetti",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Rossetti, A. (2019). Intralingual translation and cascading crises. Translation in Cascading Crises.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Automatic text simplification: Synthesis lectures on human language technologies",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Saggion",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "",
                "volume": "10",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Saggion, H. (2017). Automatic text simplification: Synthesis lectures on human language technologies, vol. 10 (1). California, Morgan & Claypool Publishers.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Simplifica: a tool for authoring simplified texts in brazilian portuguese guided by readability assessments",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Scarton",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Oliveira",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Candido",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Gasperin",
                        "suffix": ""
                    },
                    {
                        "first": "Alu\u00edsio",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the NAACL HLT 2010 Demonstration Session",
                "volume": "",
                "issue": "",
                "pages": "41--44",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Scarton, C., Oliveira, M., Candido Jr, A., Gasperin, C., and Alu\u00edsio, S. (2010). Simplifica: a tool for authoring simplified texts in brazilian portuguese guided by readability assessments. In Proceedings of the NAACL HLT 2010 Demonstration Session, pages 41-44.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "An architecture for a text simplification system",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Siddharthan",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Language Engineering Conference",
                "volume": "",
                "issue": "",
                "pages": "64--71",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Siddharthan, A. (2002). An architecture for a text simplification system. In Language Engineering Conference, pages 64-71.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "A survey of research on text simplification",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Siddharthan",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "ITL-International Journal of Applied Linguistics. Special Issue on Readability and Text Simplification",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Siddharthan, A. (2014). A survey of research on text simplification. ITL-International Journal of Applied Linguistics. Special Issue on Readability and Text Simplification. Peeters Publishers, Belgium.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Exploring measures of \"readability\" for spoken language: Analyzing linguistic features of subtitles to identify age-specific tv programs",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Vajjala",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Meurers",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR) at EACL",
                "volume": "14",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Vajjala, S. and Meurers, D. (2014). Exploring measures of \"readability\" for spoken language: Analyzing linguistic features of subtitles to identify age-specific tv programs. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR) at EACL, volume 14.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Crawling by readability level",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "A"
                        ],
                        "last": "Wagner Filho",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Wilkens",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Zilio",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Idiart",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Villavicencio",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "International Conference on Computational Processing of the Portuguese Language",
                "volume": "",
                "issue": "",
                "pages": "306--318",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wagner Filho, J. A., Wilkens, R., Zilio, L., Idiart, M., and Villavicencio, A. (2016). Crawling by readability level. In International Conference on Computational Processing of the Portuguese Language, pages 306-318. Springer.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Size does not matter. frequency does. a study of features for measuring lexical complexity",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Wilkens",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Dalla Vecchia",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "Z"
                        ],
                        "last": "Boito",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Padr\u00f3",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Villavicencio",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Advances in Artificial Intelligence-IBERAMIA 2014",
                "volume": "",
                "issue": "",
                "pages": "129--140",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wilkens, R., Dalla Vecchia, A., Boito, M. Z., Padr\u00f3, M., and Villavicencio, A. (2014). Size does not matter. frequency does. a study of features for measuring lexical complexity. In Advances in Artificial Intelligence-IBERAMIA 2014, pages 129-140. Springer.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Passport: A dependency parsing model for portuguese",
                "authors": [
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Zilio",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Wilkens",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Fairon",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "International Conference on Computational Processing of the Portuguese Language",
                "volume": "",
                "issue": "",
                "pages": "479--489",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zilio, L., Wilkens, R., and Fairon, C. (2018). Passport: A dependency parsing model for portuguese. In International Conference on Computational Processing of the Portuguese Language, pages 479-489. Springer.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Interface de acesso ao tep 2.0-thesaurus para o portugu\u00eas do brasil",
                "authors": [
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Maziero",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Pardo",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Maziero, E. and Pardo, T. (2008). Interface de acesso ao tep 2.0-thesaurus para o portugu\u00eas do brasil. Relat\u00f3rio t\u00e9cnico. University of Sao Paulo.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Corpop: a corpus of popular brazilian portuguese",
                "authors": [
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Pasqualini",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "J B"
                        ],
                        "last": "Finatto",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Latin American and Iberian Languages Open Corpora Forum -Open-Cor",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pasqualini, B. and Finatto, M. J. B. (2018). Corpop: a corpus of popular brazilian portuguese. In Latin American and Iberian Languages Open Corpora Forum - Open-Cor.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "CorPop: um corpus de refer\u00eancia do portugu\u00eas popular escrito do Brasil",
                "authors": [
                    {
                        "first": "B",
                        "middle": [
                            "F"
                        ],
                        "last": "Pasqualini",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pasqualini, B. F. (2018). CorPop: um corpus de refer\u00eancia do portugu\u00eas popular escrito do Brasil. Ph.D. thesis, Universidade Federal do Rio Grande do Sul.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "Suggestions of simplifications for a text excerpt about Parkinson's disease on MedSimples. Source: https://pt.wikipedia.org/wiki/Doen%C3%A7a_de_Parkinson [caretaker], \"DP\" [acronym for Parkinson's disease], \"sintoma motor\" [motor symptom], and \"qualidade de vida\" [quality of life]. These terms were manually selected based on word and n-grams lists extracted from the book Entendendo a Doen\u00e7a de Parkinson [Understanding Parkinson's Disease]",
                "num": null,
                "uris": null,
                "type_str": "figure"
            },
            "TABREF0": {
                "html": null,
                "content": "<table><tr><td>Resource</td><td>Source</td><td># of Items</td></tr><tr><td>List of simple words</td><td>CorPop</td><td>6,881</td></tr><tr><td>List of complex words</td><td>TeP</td><td>15,427</td></tr><tr><td>List of terms</td><td>Handcrafted + Validation</td><td>439</td></tr><tr><td colspan=\"3\">Table 1: Lexical resources used by MedSimples for iden-</td></tr><tr><td colspan=\"3\">tifying complex lexical items and suggesting simpler alter-</td></tr><tr><td>natives.</td><td/><td/></tr></table>",
                "type_str": "table",
                "text": "For instance, it is possible to substitute the word involunt\u00e1rio [involuntary] with inconsciente [unconscious] without much semantic difference. However, substituting the term dopamina [dopamine] with a simplified version would render the information much less precise, and this could have serious, life-impacting consequences. Considering this different treatment for complex phrases and terms, MedSimples relies on two lexical resources: a list with simpler suggestions for complex phrases from the general language, and a list of simpler definitions for terms (and, when possible, simpler lexical variants).",
                "num": null
            },
            "TABREF2": {
                "html": null,
                "content": "<table><tr><td/><td>A1</td><td>A2</td><td>A3</td><td>A4</td><td>A5</td><td>A6</td><td>A7</td><td>A8</td></tr><tr><td>A1</td><td colspan=\"8\">1.0000 0.3828 0.4292 0.3823 0.3355 0.4725 0.2259 0.0765</td></tr><tr><td>A2</td><td colspan=\"8\">0.3828 1.0000 0.3568 0.2982 0.2290 0.3534 0.2389 0.1667</td></tr><tr><td>A3</td><td colspan=\"8\">0.4292 0.3568 1.0000 0.2625 0.3232 0.5775 0.2946 0.0480</td></tr><tr><td>A4</td><td colspan=\"8\">0.3823 0.2982 0.2625 1.0000 0.3854 0.2165 0.1121 0.0465</td></tr><tr><td>A5</td><td colspan=\"8\">0.3355 0.2290 0.3232 0.3854 1.0000 0.2090 0.1390 0.0237</td></tr><tr><td>A6</td><td colspan=\"8\">0.4725 0.3534 0.5775 0.2165 0.2090 1.0000 0.2235 0.1284</td></tr><tr><td>A7</td><td colspan=\"8\">0.2259 0.2389 0.2946 0.1121 0.1390 0.2235 1.0000 0.0734</td></tr><tr><td>A8</td><td colspan=\"8\">0.0765 0.1667 0.0480 0.0465 0.0237 0.1284 0.0734 1.0000</td></tr><tr><td colspan=\"9\">Mean 0.3292 0.2894 0.3274 0.2433 0.2350 0.3115 0.1868 0.0805</td></tr></table>",
                "type_str": "table",
                "text": "Cohen's kappa pairwise agreement among all annotators. The mean scores ignore the lines where annotators are paired with themselves.",
                "num": null
            },
            "TABREF5": {
                "html": null,
                "content": "<table><tr><td>: Error Analysis</td></tr></table>",
                "type_str": "table",
                "text": "",
                "num": null
            }
        }
    }
}