Accepted Papers
Glaucoma Detection and Classification Using Image Processing with Different Convnet Architectures
Vaibhav Khandelwal1 and Dr. Ajay Kumar2
1Department of Information Technology, ABV-IIITM, Gwalior, Madhya Pradesh
2Associate Professor, ABV-IIITM, Gwalior, Madhya Pradesh
ABSTRACT

Automatic detection of diseases using machine learning techniques is still a relatively unexplored field of research. Such innovations can help improve medical practice and refine health care systems all over the world. Glaucoma is a permanent eye disease that leads to vision loss. The cup-to-disc ratio (CDR) is an essential factor in the screening and diagnosis of glaucoma, so developing a model for segmentation of the optic disc (OD) and optic cup (OC) from fundus images is a crucial task. Existing methods are mostly based on hand-crafted features, which require sufficiently discriminative representations and are easily affected by disordered regions and low contrast; deep learning systems for glaucoma detection are therefore strongly needed. This thesis aims to develop a deep learning (DL) architecture with a convolutional neural network for automated glaucoma examination. Deep learning methods such as convolutional neural networks (CNNs) can learn a hierarchical representation of images to distinguish between glaucoma and non-glaucoma patterns for diagnostic decisions. The thesis concludes by comparing the performance of feature-based and neural-network-based analysis in terms of accuracy and real-time processing, and thereby evaluates different ConvNet architectures such as U-Net and M-Net for the feasibility of multi-class detection.
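A minimal sketch of how the CDR cited above can be derived from OD/OC segmentation outputs (not the authors' code; the binary-mask format and the ~0.6 screening cutoff are illustrative assumptions):

```python
import numpy as np

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary OD/OC segmentation masks,
    e.g. thresholded U-Net or M-Net outputs (2-D arrays of 0/1)."""
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    if disc_rows.size == 0 or cup_rows.size == 0:
        raise ValueError("empty mask: segmentation produced no region")
    disc_height = disc_rows[-1] - disc_rows[0] + 1
    cup_height = cup_rows[-1] - cup_rows[0] + 1
    return cup_height / disc_height

# A vertical CDR above roughly 0.6 is a common screening cue for glaucoma;
# the exact cutoff is clinic-dependent and assumed here for illustration.
```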

KEYWORDS

Glaucoma Detection, Image Segmentation, Computer Vision, Optic Disc Segmentation, Optic Cup Segmentation, Convolutional Neural Network, Deep Learning.


Heuristic Reasoning And The Application Of The Concept Of Fuzzy Decision Variables In The Quantitative Risk Analysis Of Construction Projects In Nigeria
Ibrahim Yakubu, Abubakar Tafawa Balewa University, Nigeria
ABSTRACT

The study utilized heuristic reasoning and the concept of Fuzzy Decision Variables to undertake the risk analysis of a proposed construction project in a selected domain. The objectives included determining the sources of risks, obtaining the Fuzzy Decision Variables by deductive reasoning, identifying the types of risks prevailing in the project, and using fuzzy set analysis to estimate the possible magnitudes of the risks. Five completed projects were analysed. For each project, a breakdown of the final contract sum into variations, remeasurement of provisional quantities, nominated subcontractors' accounts, nominated suppliers' accounts, loss and expense caused by disturbances of regular progress of the works, and fluctuation in rates of labour and prices of materials was undertaken in order to derive the sources of risks, the Fuzzy Decision Variables, and the subsequent risks. Fuzzy set analysis was used to calculate the possible magnitudes of the risks. Heuristic reasoning and fuzzy set analysis could be used in a composite framework to undertake the risk analysis of a proposed project in a selected domain.
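As one concrete (hypothetical) reading of the fuzzy set analysis step, risk magnitudes from the cost-breakdown sources can be modelled as triangular fuzzy numbers and aggregated; the representation and all numeric values below are assumptions, not data from the study:

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    """Triangular fuzzy number (low, mode, high) for a risk estimate."""
    low: float
    mode: float
    high: float

    def __add__(self, other: "TriangularFuzzyNumber") -> "TriangularFuzzyNumber":
        # Addition of triangular fuzzy numbers is component-wise.
        return TriangularFuzzyNumber(self.low + other.low,
                                     self.mode + other.mode,
                                     self.high + other.high)

    def defuzzify(self) -> float:
        # Centroid defuzzification yields a single crisp magnitude.
        return (self.low + self.mode + self.high) / 3.0

# Invented magnitudes (percent of contract sum) for three risk sources
# named in the abstract; real values would come from the project breakdowns.
risks = {
    "variations":        TriangularFuzzyNumber(2.0, 5.0, 9.0),
    "remeasurement":     TriangularFuzzyNumber(1.0, 3.0, 6.0),
    "price fluctuation": TriangularFuzzyNumber(3.0, 7.0, 12.0),
}

total = sum(risks.values(), TriangularFuzzyNumber(0.0, 0.0, 0.0))
print(f"aggregate risk ~ {total.defuzzify():.1f}% of contract sum")
```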

KEYWORDS

Heuristic reasoning, Fuzzy Decision Variables, risk and fuzzy set analysis


An Augmented Intelligence Model To Extract Pragmatic Markers
Vijay Perincherry1, David White2 and Staci Warden3
1Indiggo Associates, Bethesda, Maryland, USA
2Oteemo Inc, Washington, DC, USA
3Milken Institute, Washington, DC, USA
ABSTRACT

This paper presents a novel methodology for automatically extracting pragmatic markers from large streams of text and repositories of documents. Pragmatic markers are typically implications, innuendos, suggestions, contradictions, sarcasm, or references that are difficult to define objectively but are subjectively evident.
Our methodology uses a two-stage augmented learning model applied to a specific use case, extracting from a repository of over 1500 Article IV country reports prepared for government officials by International Monetary Fund (IMF) staff. The model uses principles of evidence theory to train a machine to decipher the textual patterns of suggested actions for government officials and to extract those suggestions from the country reports at scale.
We demonstrate the effectiveness of the model with impressive precision and recall metrics that, over time, outperform even the human trainers.
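Since the abstract names evidence theory as the combining principle, a minimal Dempster-Shafer sketch (hypothetical, not the paper's model) shows how belief masses from two textual-pattern detectors could be fused over the frame {suggested action, other}:

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two basic probability assignments with Dempster's rule.
    Keys are frozensets over the frame of discernment; masses sum to 1."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

FRAME = frozenset({"action", "other"})
# Hypothetical masses from two detectors scoring one report sentence.
m_lexical = {frozenset({"action"}): 0.6, FRAME: 0.4}
m_syntax  = {frozenset({"action"}): 0.5, frozenset({"other"}): 0.2, FRAME: 0.3}
print(dempster_combine(m_lexical, m_syntax))
```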

KEYWORDS

Natural Language Understanding, Augmented Intelligence, Pragmatics, Text Processing


Toward Multi-Label Classification Using an Ontology for Web Page Classification
Yaya Traore1, Sadouanouan Malo2, Bassole Didier1 and Sere Abdoulaye2
1University Joseph KI-ZERBO, Ouagadougou, BURKINA FASO
2University Nazi Boni, Bobo-Dioulasso, BURKINA FASO
ABSTRACT

Automatic categorization of web pages has become more significant in helping search engines provide users with relevant and quick retrieval results. In this paper, we propose a method based on Multi-label Classification (ML) using an ontology, which allows predicting the categories of a newly created and tagged web page. It uses the ontology both in the learning phase and in the prediction phase. In the learning phase, the ontology is used to build the training set. In the prediction phase, the ontology is used to place newly tagged pages in the most specific categories. The experimental evaluation demonstrates that our proposal yields substantial results.
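One way to picture the "most specific categories" placement (a toy sketch under assumed structures, not the authors' algorithm) is an ontology as a child-to-parent map, with matched categories pruned to their most specific members:

```python
# Toy ontology: category -> parent category (all names are invented).
PARENT = {
    "deep learning": "machine learning",
    "machine learning": "computer science",
    "databases": "computer science",
}

def ancestors(concept: str) -> set:
    """All strict ancestors of a concept in the ontology."""
    seen = set()
    while concept in PARENT:
        concept = PARENT[concept]
        seen.add(concept)
    return seen

def most_specific(categories: set) -> set:
    """Drop any matched category that is an ancestor of another match."""
    return {c for c in categories
            if not any(c in ancestors(other) for other in categories)}

tags = {"deep learning", "machine learning"}  # tags on the new web page
print(most_specific(tags))                    # -> {'deep learning'}
```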

KEYWORDS

Multi-label classification (ML), ontology, categorization, prediction.


Reach Us

sigpro@csity2019.org
sigprocon@yahoo.com