Results 1 - 19 of 19
1.
J Med Signals Sens ; 13(2): 73-83, 2023.
Article in English | MEDLINE | ID: mdl-37448539

ABSTRACT

Background and Objective: The endoscopic diagnosis of pathological changes in the gastroesophageal junction, including esophagitis and Barrett's mucosa, is based on the visual detection of two boundaries: the mucosal color change between the esophagus and stomach, and the top endpoint of the gastric folds. The presence and pattern of mucosal breaks at the gastroesophageal mucosal junction (Z-line) classify esophagitis in patients, and the distance between the two boundaries points to possible columnar-lined epithelium. Since visual detection may suffer from intra- and interobserver variability, our objective was to define the boundaries automatically using image processing algorithms, which may enable us to measure the dimensions of these changes in future studies. Methods: To demarcate the Z-line, the artifacts in the endoscopy images are first eliminated. In the second step, an initial contour for the Z-line is estimated using the SUSAN edge detector, the Mahalanobis distance criterion, and a Gabor filter bank. Using region-based active contours, this initial contour converges to the Z-line. Finally, by applying morphological operators and the Gabor filter bank to the region inside the Z-line, the gastric folds are segmented. Results: To evaluate the results, a database consisting of 50 images and their ground truths was collected. The average Dice coefficient and mean square error of the Z-line segmentation were 0.93 and 3.3, respectively, and the average boundary distance was 12.3 pixels. In addition, two other criteria that compare the fold segmentation with several ground truths, Sweet-Spot Coverage and Jaccard Index for Golden Standard, are 0.90 and 0.84, respectively. Conclusions: The automatic segmentations of the Z-line and gastric folds match the ground truths with appropriate accuracy.
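For readers who want to reproduce the reported evaluation metrics, the sketch below (not the authors' code) computes a Dice coefficient and a symmetric mean boundary distance for a predicted binary mask against its ground truth, using only numpy and scipy.

# Minimal sketch (not the authors' code): Dice coefficient and mean boundary
# distance between a predicted binary mask and its ground truth.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    """Dice similarity coefficient of two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def mean_boundary_distance(pred, gt):
    """Average distance (in pixels) from each boundary pixel of one mask
    to the nearest boundary pixel of the other, symmetrized."""
    def boundary(m):
        m = m.astype(bool)
        return m & ~binary_erosion(m)
    bp, bg = boundary(pred), boundary(gt)
    # distance_transform_edt gives the distance to the nearest zero pixel,
    # so pass the complement of each boundary map.
    d_to_gt = distance_transform_edt(~bg)
    d_to_pred = distance_transform_edt(~bp)
    return 0.5 * (d_to_gt[bp].mean() + d_to_pred[bg].mean())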

2.
Diagnostics (Basel) ; 13(7)2023 Mar 31.
Article in English | MEDLINE | ID: mdl-37046527

ABSTRACT

This paper presents an artificial intelligence-based algorithm for the automated segmentation of Choroidal Neovascularization (CNV) areas and for identifying the presence or absence of CNV activity criteria (branching, peripheral arcade, dark halo, shape, loop and anastomoses) in OCTA images. Methods: This retrospective, cross-sectional study includes 130 OCTA images from 101 patients with treatment-naïve CNV. At baseline, OCTA volumes of 6 × 6 mm2 were obtained to develop an AI-based algorithm that evaluates CNV activity based on five activity criteria, including tiny branching vessels, anastomoses and loops, peripheral arcades, and perilesional hypointense halos. The proposed algorithm comprises two steps. The first block includes the pre-processing and segmentation of CNVs in OCTA images using a modified U-Net network. The second block consists of five binary classification networks, each implemented both with various models trained from scratch and with transfer learning from pre-trained networks. Results: The proposed segmentation network yielded an average Dice coefficient of 0.86. The individual classifiers corresponding to the five activity criteria (branching, peripheral arcade, dark halo, shape, loop and anastomoses) showed accuracies of 0.84, 0.81, 0.86, 0.85, and 0.82, respectively. The AI-based algorithm potentially allows the reliable detection and segmentation of CNV from OCTA alone, without the need for imaging with contrast agents. The evaluation of the activity criteria in CNV lesions yields acceptable results, and the algorithm could enable the objective, repeatable assessment of CNV features.
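The abstract does not specify which pre-trained backbones were used; the sketch below is a hypothetical example of one of the five binary activity-criterion classifiers built by transfer learning, with a torchvision ResNet-18 assumed purely as a stand-in (recent torchvision API).

# Hypothetical sketch of a single binary activity-criterion classifier built by
# transfer learning from a pretrained backbone (backbone choice assumed, not
# taken from the paper).
import torch
import torch.nn as nn
from torchvision import models

def make_binary_classifier(freeze_backbone: bool = True) -> nn.Module:
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    if freeze_backbone:
        for p in net.parameters():
            p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, 1)  # single logit: criterion present/absent
    return net

model = make_binary_classifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One training step on a dummy batch of 8 RGB OCTA crops (224x224).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8, 1)).float()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()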

3.
Environ Sci Pollut Res Int ; 30(28): 71849-71863, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35091956

ABSTRACT

Freshwater scarcity, a problem that has arisen particularly as a result of the progressive environmental damage caused by human consumption patterns, is strongly associated with a loss of living quality and a drop in global socioeconomic development. Wastewater treatment is one of the measures being taken to mitigate the current situation. However, the majority of existing treatments employ chemicals that have harmful environmental consequences and low effectiveness and are prohibitively expensive in most countries. Therefore, to increase water supplies, more advanced and cost-effective water treatment technologies need to be developed for desalination and water reuse. Green technologies have been highlighted as a long-term strategy for conserving natural resources, reducing negative environmental repercussions, and boosting social and economic growth. Thus, a bibliometric technique was applied in this study to identify prominent green technologies utilised in water and wastewater treatment by analysing scientific publications in terms of authors, keywords, and countries. To do this, the VOSviewer software and the Bibliometrix R package were employed. The results of this study revealed that constructed wetlands and photocatalysis are the two technologies considered green technologies applicable to improving water and wastewater treatment processes in most scientific articles.


Subject(s)
Water Purification , Humans , Water Purification/methods , Water Supply , Conservation of Natural Resources/methods , Fresh Water , Technology
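As a toy illustration of the co-occurrence counting that underlies bibliometric keyword maps (the study itself used VOSviewer and the Bibliometrix R package, not this code), the following snippet tallies keyword pairs across hypothetical records.

# Toy illustration (not the VOSviewer/Bibliometrix pipeline used in the paper):
# counting author-keyword co-occurrences, the quantity such maps are built on.
from collections import Counter
from itertools import combinations

records = [  # hypothetical keyword lists, one per publication
    ["constructed wetlands", "wastewater treatment", "green technology"],
    ["photocatalysis", "water reuse", "green technology"],
    ["constructed wetlands", "photocatalysis", "wastewater treatment"],
]

pairs = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        pairs[(a, b)] += 1

for (a, b), n in pairs.most_common(5):
    print(f"{a} <-> {b}: {n}")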
4.
Cogn Neurodyn ; 16(6): 1407, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36409166

ABSTRACT

[This corrects the article DOI: 10.1007/s11571-022-09781-7.].

5.
Cogn Neurodyn ; 16(6): 1393-1405, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36408062

ABSTRACT

This paper proposes a new automatic method for spike sorting and tracking of non-stationary data based on the Dirichlet Process Mixture (DPM). The data are divided into non-overlapping intervals, and mixtures are applied to individual frames rather than to the whole dataset. In this paper, we use the information from the previous frame to estimate the cluster parameters of the current interval. Specifically, the cluster means of the previous frame are used to estimate the cluster means of the current one, and the other parameters are estimated via noninformative priors. The proposed method is capable of tracking variations in the size, shape, or location of clusters as well as detecting their appearance and disappearance. We present results in the two-dimensional space of the first and second principal components (PC1-PC2), but any other feature extraction method that allows spikes to be modeled with normal or Student's t distributions can also be applied. Application of this approach to simulated data and to recordings from the anesthetized rat hippocampus confirms its superior performance in comparison to a standard DPM that uses no information from previous frames.
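A simplified stand-in for the frame-wise clustering idea can be sketched with scikit-learn's truncated Dirichlet-process mixture; note that the paper's key step of feeding the previous frame's cluster means into the priors of the current frame is not directly supported by scikit-learn and is omitted here.

# Simplified stand-in (not the authors' model): fit a truncated Dirichlet-process
# Gaussian mixture to the PC1-PC2 features of each non-overlapping frame.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
features = rng.normal(size=(6000, 2))        # stand-in for PC1-PC2 of spike waveforms
frames = np.array_split(features, 6)         # non-overlapping time intervals

prev_means = None
for i, X in enumerate(frames):
    dpm = BayesianGaussianMixture(
        n_components=10,                       # truncation level
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    ).fit(X)
    labels = dpm.predict(X)
    active = np.unique(labels)                # clusters that actually claim spikes
    print(f"frame {i}: {active.size} active clusters")
    prev_means = dpm.means_[active]           # would seed the next frame's priors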

6.
Neural Comput ; 33(5): 1269-1299, 2021 04 13.
Article in English | MEDLINE | ID: mdl-33617745

ABSTRACT

It is of great interest to characterize the spiking activity of individual neurons in a cell ensemble. Many different mechanisms, such as synaptic coupling and the cell's own spiking history along with that of its neighbors, drive a cell's firing properties. Though this is a widely studied modeling problem, there is still room to improve on the simplifications embedded in previous models. The first simplification is that the synaptic coupling mechanisms in previous models do not replicate the complex dynamics of the synaptic response. The second is that the number of synaptic connections in these models is an order of magnitude smaller than in an actual neuron. In this research, we push past this barrier by incorporating a more accurate model of the synapse and propose a system identification solution that can scale to a network incorporating hundreds of synaptic connections. Although a neuron has hundreds of synaptic connections, only a subset of these connections contributes significantly to its spiking activity. As a result, we assume the synaptic connections are sparse, and to characterize these dynamics, we propose a Bayesian point-process state-space model that incorporates the sparsity of synaptic connections through a regularization technique. We develop an extended expectation-maximization algorithm to estimate the free parameters of the proposed model and demonstrate the application of this methodology to the problem of estimating the parameters of many dynamic synaptic connections. We then present a simulation example with dynamic synapses across a range of parameter values and show that the model parameters can be estimated using our method. We also apply the proposed algorithm to intracellular data containing 96 presynaptic connections and assess the estimation accuracy of our method using a combination of goodness-of-fit measures.
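The sketch below is not the authors' Bayesian state-space EM; it only illustrates the sparsity idea with a discrete-time point-process (Bernoulli) GLM over many presynaptic inputs, fit with an L1 penalty via proximal gradient steps on synthetic data.

# Minimal sketch, not the authors' Bayesian state-space EM: a discrete-time
# point-process (Bernoulli) GLM whose weights over many presynaptic inputs are
# fit with an L1 penalty (proximal gradient) to encourage sparse connectivity.
import numpy as np

rng = np.random.default_rng(1)
T, n_syn = 20000, 100
X = rng.binomial(1, 0.02, size=(T, n_syn)).astype(float)   # presynaptic spike indicators
w_true = np.zeros(n_syn); w_true[:5] = 2.0                  # only a few effective synapses
p = 1.0 / (1.0 + np.exp(-(X @ w_true - 4.0)))
y = rng.binomial(1, p)                                      # postsynaptic spikes

w, b, lr, lam = np.zeros(n_syn), -4.0, 2.0, 5e-4            # bias started near baseline rate
for _ in range(3000):
    eta = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_w = X.T @ (eta - y) / T
    grad_b = np.mean(eta - y)
    w -= lr * grad_w
    b -= lr * grad_b
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold (L1 prox)

print("synapses kept by the L1 penalty:", np.flatnonzero(np.abs(w) > 0.1))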

7.
Arch Phys Med Rehabil ; 102(7): 1390-1403, 2021 07.
Article in English | MEDLINE | ID: mdl-33484693

ABSTRACT

OBJECTIVES: To examine the adoption of telerehabilitation services from the stakeholders' perspective and to investigate recent advances and future challenges. DATA SOURCES: A systematic review of English articles indexed by PubMed, Thomson Institute of Scientific Information's Web of Science, and Elsevier's Scopus between 1998 and 2020. STUDY SELECTION: The first author (N.N.) screened all titles and abstracts based on the eligibility criteria. Experimental and empirical articles such as randomized and nonrandomized controlled trials, pre-experimental studies, case studies, surveys, feasibility studies, qualitative descriptive studies, and cohort studies were all included in this review. DATA EXTRACTION: The first, second, and fourth authors (N.N., W.I., B.N.) independently extracted data using data fields predefined by the third author (M.B.). The data extracted through this review included study objective, study design, purpose of telerehabilitation, telerehabilitation equipment, patient/sample, age, disease, data collection methods, theory/framework, and adoption themes. DATA SYNTHESIS: A telerehabilitation adoption process model was proposed to highlight the significance of the readiness stage and to classify the primary studies. The articles were classified based on 6 adoption themes, namely users' perception, perspective, and experience; users' satisfaction; users' acceptance and adherence; TeleRehab usability; individual readiness; and users' motivation and awareness. RESULTS: A total of 133 of 914 articles met the eligibility criteria. The majority of papers were randomized controlled trials (27%), followed by surveys (15%). Almost 49% of the papers examined the use of telerehabilitation technology in patients with nervous system problems, 23% examined physical disability disorders, 10% examined cardiovascular diseases, and 8% inspected pulmonary diseases. CONCLUSION: Research on the adoption of telerehabilitation is still in its infancy and needs further attention from researchers working in health care, especially in resource-limited countries. Indeed, studies on the adoption of telerehabilitation are essential to minimize implementation failure, as these studies will help to inform health care personnel and clients about successful adoption strategies.


Subject(s)
Patient Satisfaction , Telerehabilitation/methods , Humans , Stakeholder Participation , Technology
8.
Neural Comput ; 32(11): 2145-2186, 2020 11.
Article in English | MEDLINE | ID: mdl-32946712

ABSTRACT

Marked point process models have recently been used to capture the coding properties of neural populations from multiunit electrophysiological recordings without spike sorting. These clusterless models have been shown in some instances to better describe the firing properties of neural populations than collections of receptive field models for sorted neurons and to lead to better decoding results. To assess their quality, we previously proposed a goodness-of-fit technique for marked point process models based on time rescaling, which for a correct model produces a set of uniform samples over a random region of space. However, assessing uniformity over such a region can be challenging, especially in high dimensions. Here, we propose a set of new transformations in both time and the space of spike waveform features, which generate events that are uniformly distributed in the new mark and time spaces. These transformations are scalable to multidimensional mark spaces and provide uniformly distributed samples in hypercubes, which are well suited for uniformity tests. We discuss the properties of these transformations and demonstrate aspects of model fit captured by each transformation. We also compare multiple uniformity tests to determine their power to identify lack-of-fit in the rescaled data. We demonstrate an application of these transformations and uniformity tests in a simulation study. Proofs for each transformation are provided in the appendix.
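For orientation, the snippet below illustrates the classical time-rescaling check that this work generalizes to marked point processes: rescaled inter-event times are mapped to [0, 1] and tested for uniformity with a Kolmogorov-Smirnov test (the paper's new mark-space transformations are not reproduced).

# Sketch of the classical time-rescaling check: rescaled inter-event times of a
# point process with known conditional intensity should be i.i.d. uniform on [0, 1].
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(2)
rate = 5.0                                   # constant intensity (events per unit time)
spike_times = np.cumsum(rng.exponential(1.0 / rate, size=500))

# Integrated intensity between successive events; for a constant rate this is
# rate * inter-spike interval.  Replace with the model's integral in general.
dLambda = rate * np.diff(np.concatenate(([0.0], spike_times)))
z = 1.0 - np.exp(-dLambda)                   # should be Uniform(0, 1) under a correct model

stat, pval = kstest(z, "uniform")
print(f"KS statistic = {stat:.3f}, p-value = {pval:.3f}")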

9.
J Theor Biol ; 506: 110418, 2020 12 07.
Article in English | MEDLINE | ID: mdl-32738265

ABSTRACT

Numerous studies have investigated models of efficient neural encoding of sensory data in the retina. The retina, the innermost coat of the eye, is the first and most important stage of the mammalian visual system and is responsible for the earliest neural processing. The retina encodes light-intensity information into sequences of spikes, which are sent to retinal ganglion cells (RGCs) for further processing. An appropriate retinal encoding model should match the real retina as closely as possible by respecting the physiological constraints of the visual pathway, so that most of the information in the input signal is transferred to the brain without excessive channel redundancy. In this paper, inspired by existing linear models of the retinal encoding process, which have employed input noise and the spatial locality of the RGCs' receptive fields (RFs) in the calculation of the encoding matrix, two additional physiological constraints, adapted from the real retina, are taken into account to achieve a more realistic model of the mammalian retina. These new constraints, the correlation between RGCs and the spatial locality of the photoreceptors' projective fields (PFs), are modeled in mathematical form and analyzed in detail. To quantify the fidelity of the proposed encoding matrix and prove its superiority over existing models, several measures are calculated and presented: the mean square error (MSE) between the original and reconstructed images, the channel redundancy, the amount of information transferred through the channel, and the capacity wasted on carrying input noise, among others. The results of these calculations show that the proposed model transfers the input information with less channel redundancy. In other words, it reduces the portion of channel capacity wasted on carrying input noise compared with existing models. Also, because the proposed model considers extra physiological constraints, a slightly higher MSE is acceptable in exchange for greater similarity to the real retina.


Subject(s)
Retina , Visual Pathways , Animals , Brain , Photic Stimulation , Retinal Ganglion Cells
10.
ACS Macro Lett ; 9(7): 950-956, 2020 Jul 21.
Article in English | MEDLINE | ID: mdl-35648606

ABSTRACT

In a previous work on a poly(ether ether ketone) (PEEK) melt, above its nominal melting temperature (Tm ≅ 335 °C), a severe Cox-Merz rule failure was observed. The abrupt decrease in the apparent shear viscosity was ascribed to the formation of flow-induced crystallization precursors. Here shear rheology and reflection polariscope experiments are utilized to unravel the structural changes occurring under shear on a similar PEEK melt above Tm. Three regimes of the flow curve were identified from low (0.01 s-1) to high shear rates (1000 s-1): (I) an isotropic structure with weak birefringence due to polymer chain orientation and mild shear thinning for γ̇ < 1 s-1, (II) an isotropic-nematic transition accompanied by strong birefringence, two steady-state viscosities, and large nematic polydomain director fluctuations, and (III) shear-thinning behavior with an η ∼ γ̇-0.5 dependence for γ̇ > 20 s-1, typically found in nematic fluids. The findings reported in this experimental work suggest that the nematic phase may represent the early stage of the formation of shear-induced crystallization precursors.
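As a small illustration of how the regime III exponent can be extracted (with made-up numbers, not the paper's measurements), a log-log fit of viscosity versus shear rate recovers the η ∼ γ̇-0.5 slope:

# Illustrative only (hypothetical data): extracting the shear-thinning exponent n
# in eta ~ gamma_dot**n from the high-rate branch of a flow curve by a log-log fit.
import numpy as np

gamma_dot = np.logspace(np.log10(20), 3, 15)          # shear rates in regime III (1/s)
eta = 800.0 * gamma_dot**-0.5 * (1 + 0.05 * np.random.default_rng(3).normal(size=15))

slope, intercept = np.polyfit(np.log10(gamma_dot), np.log10(eta), 1)
print(f"shear-thinning exponent n ≈ {slope:.2f}")     # expected close to -0.5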

11.
Soft Matter ; 16(1): 200-207, 2020 Jan 07.
Article in English | MEDLINE | ID: mdl-31774426

ABSTRACT

Dry native cellulose solutions in 1-ethyl-3-methylimidazolium methylphosphonate (EMImMPO3H), 1-ethyl-3-methylimidazolium acetate (EMImAc), and 1-butyl-3-methylimidazolium chloride (BMImCl) ionic liquids (ILs) were investigated using subambient linear viscoelastic oscillatory shear. Glass transition temperatures (Tg) of solutions with various cellulose concentrations up to 8.0 wt% were observed as the peaks of the loss tangent tan(δ) and loss modulus G'' in descending temperature sweeps at 1 rad s-1. Cellulose/IL solutions showed a minimum in Tg at ∼2.0 wt% cellulose content before Tg increased with cellulose concentration, suggesting a perturbation of the strongly structured IL solvents by the cellulose chains. Isothermal frequency sweeps in the vicinity of Tg were used to construct time-temperature-superposition master curves. The angular frequency shift factor aT as a function of temperature indicates Arrhenius behavior within a 9 K range near Tg, allowing calculation of fragility, which was found to be constant up to 8.0 wt% cellulose concentration. This result implies that adding cellulose initially decreases Tg by disrupting the ionic regularity of the ILs but does not appear to change their fragility.
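With hypothetical numbers (not the paper's data), the snippet below shows how an Arrhenius fit of the shift factor aT near Tg yields an activation energy and, from it, the dynamic fragility m = Ea / (ln(10)·R·Tg):

# Hypothetical numbers, not the paper's data: Arrhenius fit of the shift factor
# aT near Tg and the resulting dynamic fragility m = Ea / (ln(10) * R * Tg).
import numpy as np

R = 8.314                                    # J mol^-1 K^-1
Tg = 200.0                                   # K (placeholder)
T = Tg + np.arange(0.0, 10.0, 1.0)           # narrow window just above Tg
Ea_true = 250e3                              # J mol^-1 (placeholder)
log_aT = Ea_true / (np.log(10) * R) * (1.0 / T - 1.0 / Tg)

slope, _ = np.polyfit(1.0 / T, log_aT, 1)    # slope = Ea / (ln(10) * R)
Ea = slope * np.log(10) * R
m = Ea / (np.log(10) * R * Tg)               # fragility index
print(f"Ea ≈ {Ea/1e3:.0f} kJ/mol, fragility m ≈ {m:.0f}")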

12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 4732-4735, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441406

ABSTRACT

The emergence of deep learning techniques has provided new tools for the analysis of complex data in the field of neuroscience. In parallel, advanced statistical approaches such as point-process modeling provide powerful tools for analyzing the spiking activity of neural populations. How statistical and machine learning techniques compare when applied to neural data remains largely unclear. In this research, we compare the performance of a point-process filter and a long short-term memory (LSTM) network in decoding the 2D movement trajectory of a rat using the neural activity recorded from an ensemble of hippocampal place cells. We compute the least absolute error (LAE), a measure of prediction accuracy, and the coefficient of determination (R2), a measure of prediction consistency, to compare the performance of the two methods. We show that the LSTM and the point-process filter provide comparable accuracy in predicting position; however, the point-process filter provides additional information about the prediction that is unavailable from the LSTM. Though previous results report better performance using deep learning techniques, our results indicate that this is not universally the case. We also investigate how these techniques encode the information carried by place cell activity and compare the computational efficiency of the two methods. While the point-process model is built using the receptive field of each place cell, we show that the LSTM does not necessarily encode receptive fields, but instead decodes the movement trajectory using other features of the neural activity. Although it is less robust, the LSTM runs more than 7 times faster than the fastest point-process filter in this research, providing a strong advantage in computational efficiency. Together, these results suggest that the point-process filter and LSTM approaches each provide distinct advantages; the choice of model should be informed by the specific scientific question of interest.


Subject(s)
Deep Learning , Place Cells , Animals , Movement , Rats
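The two comparison metrics reported above can be computed for any decoder's 2D trajectory output as in the sketch below (random placeholder data, not the hippocampal recordings):

# Sketch of the two reported comparison metrics for a decoder's 2D trajectory
# output (the data below are random placeholders).
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(4)
true_xy = rng.uniform(0, 100, size=(1000, 2))            # true 2D positions (cm)
pred_xy = true_xy + rng.normal(0, 5, size=(1000, 2))     # a decoder's predictions

lae = mean_absolute_error(true_xy, pred_xy)              # least absolute error
r2 = r2_score(true_xy, pred_xy)                          # coefficient of determination
print(f"LAE = {lae:.2f} cm, R^2 = {r2:.3f}")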
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 2362-2365, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440881

ABSTRACT

Biophysical models are widely used to characterize the temporal dynamics of brain networks at different topological and spatial scales. In parallel, the state-space modeling framework with point-process observations has been successfully applied to characterizing the spiking activity of neuronal ensembles in response to different dynamical covariates. Parameter estimation in biophysical models is generally done heuristically, which hampers their applicability and interpretability, and heuristic parameter estimation becomes intractable as the number of model parameters grows. Here, we propose an algorithm for estimating biophysical model parameters using point-process models and a state-space framework. The framework provides methods for parameter estimation as well as model validation. We demonstrate the application of this methodology to the problem of estimating the parameters of a dynamic synapse model. We generate simulation data for the dynamic synapse across a range of parameter values and assess the estimation accuracy of our method using a combination of goodness-of-fit measures. The proposed methodology can be applied to parameter estimation problems across a broad range of biophysical models, including Hodgkin-Huxley models and network models.


Subject(s)
Algorithms , Neurons/physiology , Synapses/physiology , Humans
14.
IEEE Trans Biomed Eng ; 65(5): 989-1001, 2018 05.
Article in English | MEDLINE | ID: mdl-28783619

ABSTRACT

This paper presents a fully automated algorithm to segment fluid-associated (fluid-filled) and cyst regions in optical coherence tomography (OCT) retina images of subjects with diabetic macular edema. The OCT image is segmented using a novel neutrosophic transformation and a graph-based shortest path method. In the neutrosophic domain, an image is transformed into three sets: T (true), I (indeterminate), which represents noise, and F (false). This paper makes four key contributions. First, a new method is introduced to compute the indeterminacy set I, and a new correction operation is introduced to compute the true set in the neutrosophic domain. Second, a graph shortest-path method is applied in the neutrosophic domain to segment the inner limiting membrane and the retinal pigment epithelium as regions of interest (ROI), and the outer plexiform layer and inner segment myeloid as middle layers, using a novel definition of the edge weights. Third, a new cost function for cluster-based fluid/cyst segmentation in the ROI is presented, which also includes a novel approach to estimating the number of clusters automatically. Fourth, the final fluid regions are obtained by ignoring very small regions and the regions between the middle layers. The proposed method is evaluated using two publicly available datasets, Duke and Optima, and a third, local dataset from the UMN clinic that is available online. The proposed algorithm outperforms the previously proposed Duke algorithm by 8% with respect to the Dice coefficient and by 5% with respect to precision on the Duke dataset, while achieving about the same sensitivity. It also outperforms a prior method on the Optima dataset by 6%, 22%, and 23% with respect to the Dice coefficient, sensitivity, and precision, respectively. Finally, the proposed algorithm achieves sensitivities of 67.3%, 88.8%, and 76.7% for the Duke, Optima, and University of Minnesota (UMN) datasets, respectively.


Subject(s)
Diabetic Retinopathy/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Macular Edema/diagnostic imaging , Tomography, Optical Coherence/methods , Algorithms , Cysts/diagnostic imaging , Humans , Retina/diagnostic imaging , Sensitivity and Specificity
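As a simplified stand-in for the graph shortest-path layer step (the paper's neutrosophic edge weights are not reproduced), the sketch below traces a minimum-cost left-to-right path across a cost image with column-wise dynamic programming:

# Simplified stand-in for a graph shortest-path layer boundary: a column-wise
# dynamic-programming minimum-cost path across a cost image such as the
# negative vertical gradient of a B-scan.
import numpy as np

def shortest_path_boundary(cost, max_jump=1):
    """Return one row index per column tracing a minimum-cost left-to-right path."""
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

# Toy B-scan: a bright horizontal band whose top edge the path should follow.
img = np.zeros((60, 80)); img[30:35, :] = 1.0
cost = -np.diff(img, axis=0, prepend=0.0)    # strongly negative at the band's top edge
boundary = shortest_path_boundary(cost)
print(boundary[:10])                         # expected to sit on row 30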
15.
J Med Signals Sens ; 7(4): 203-212, 2017.
Article in English | MEDLINE | ID: mdl-29204377

ABSTRACT

BACKGROUND: Pulmonary nodules are a sign of lung cancer. The shape and size of these nodules are used to diagnose lung cancer in computed tomography (CT) images. In the early stages, nodules are very small, and the radiologist has to review many CT images to diagnose the disease, which can lead to operator mistakes. Image processing algorithms are used as an aid to detect and localize nodules. METHODS: In this paper, a novel lung nodule detection scheme is proposed. First, in the preprocessing stage, our algorithm segments the two lung lobes to increase processing speed and accuracy. Second, template matching is applied to detect suspicious nodule candidates, which include both nodules and some blood vessels. Third, the suspicious nodule candidates are segmented by localized active contours. Finally, the false-positive errors produced by vessels are reduced using two- and three-dimensional geometric features in three steps. In these steps, the size, long and short diameters, and sphericity are used to decrease the false-positive rate. RESULTS: In the first step, vessels that are parallel to the CT cross-plane are identified. In the second step, oblique vessels are detected using the shift of the center of gravity between two successive slices. In the third step, vessels perpendicular to the CT cross-plane are identified. Using these steps, vessels are separated from nodules. The Early Lung Cancer Action Project dataset, a popular benchmark, is used in this work. CONCLUSIONS: Our algorithm achieved a sensitivity of 90.1% and a specificity of 92.8%, which is quite acceptable in comparison with other related works.
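A toy 2D illustration of the template-matching candidate-detection step is sketched below using scikit-image; the paper itself works on CT volumes and adds the vessel-rejection rules described above, which are not reproduced here.

# Toy 2D illustration of the template-matching candidate step (the paper works
# on CT volumes and applies further vessel-rejection rules not shown here).
import numpy as np
from skimage.feature import match_template, peak_local_max

rng = np.random.default_rng(5)
image = rng.normal(0, 0.1, size=(256, 256))

# Plant two disc-like "nodules" and build a matching disc template.
yy, xx = np.mgrid[-7:8, -7:8]
disc = (xx**2 + yy**2 <= 36).astype(float)
for cy, cx in [(60, 80), (180, 150)]:
    image[cy - 7:cy + 8, cx - 7:cx + 8] += disc

ncc = match_template(image, disc, pad_input=True)        # normalized cross-correlation
candidates = peak_local_max(ncc, min_distance=10, threshold_abs=0.5)
print(candidates)                                         # (row, col) of suspicious spots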

16.
PLoS One ; 12(10): e0186949, 2017.
Article in English | MEDLINE | ID: mdl-29059257

ABSTRACT

A fully automated method based on graph shortest path, graph cut, and neutrosophic (NS) sets is presented for fluid segmentation in OCT volumes of subjects with exudative age-related macular degeneration (EAMD). The proposed method includes three main steps: 1) The inner limiting membrane (ILM) and the retinal pigment epithelium (RPE) layers are segmented using the proposed graph shortest-path methods in the NS domain. A flattened RPE boundary is calculated such that all three types of fluid regions, intra-retinal, sub-retinal, and sub-RPE, are located above it. 2) Seed points for fluid (object) and tissue (background) are initialized for the graph cut by the proposed automated method. 3) A new cost function is proposed in kernel space and minimized with max-flow/min-cut algorithms, leading to a binary segmentation. Important properties of the proposed steps are proven, and the quantitative performance of each step is analyzed separately. The proposed method is evaluated using a publicly available dataset referred to as Optima and a local dataset from the UMN clinic. For fluid segmentation in 2D individual slices, the proposed method outperforms previously proposed methods by 18% and 21% with respect to the Dice coefficient and sensitivity, respectively, on the Optima dataset, and by 16%, 11%, and 12% with respect to the Dice coefficient, sensitivity, and precision, respectively, on the local UMN dataset. Finally, for 3D fluid volume segmentation, the proposed method achieves a true positive rate (TPR) and false positive rate (FPR) of 90% and 0.74%, respectively, with a correlation of 95% between automated and expert manual segmentations using linear regression analysis.


Subject(s)
Automation , Wet Macular Degeneration/diagnostic imaging , Humans , Image Interpretation, Computer-Assisted , Tomography, Optical Coherence
17.
Biomacromolecules ; 18(9): 2849-2857, 2017 Sep 11.
Article in English | MEDLINE | ID: mdl-28792747

ABSTRACT

Cellulose coagulates upon the addition of water to its solutions in ionic liquids. Although cellulose remains in solution at much higher water contents, here we report the effect of 0-3 wt % water on the solution rheology of cellulose in 1-butyl-3-methylimidazolium chloride and 1-ethyl-3-methylimidazolium acetate. Fourier transform infrared spectroscopy, thermal gravimetric analysis, and polarized light microscopy were also used to study water absorption in the solutions. Tiny amounts of water (0.25 wt %) can significantly affect the rheological properties of the solutions, imparting a yield stress, while dry solutions appear to be ordinary viscoelastic liquids. The yield stress grows linearly with water content and saturates at a level that increases with the square of the cellulose content. Annealing solutions containing small amounts of water at 80 °C for 20 min transforms the samples back into the fully dissolved "dry" state.


Subject(s)
Cellulose/chemistry , Elasticity , Hydrophobic and Hydrophilic Interactions , Ionic Liquids/chemistry , Viscosity , Imidazoles/chemistry , Rheology , Water/chemistry
18.
Carbohydr Polym ; 140: 393-9, 2016 Apr 20.
Article in English | MEDLINE | ID: mdl-26876866

ABSTRACT

The elastic moduli of PLA reinforced with 5 and 10 wt.% CNF with the carrier, at a frequency (ω) of 0.07, were 67% and 415% higher, respectively, than that of neat PLA. The shear viscosity at a shear rate of 0.01 (η0.01) for PLA + 10 wt.% CNF was 32% higher than that of the neat PLA matrix. The η0.01 of PLA reinforced with 5 wt.% CNF and the PHB carrier was similar to that of neat PLA. The tensile and flexural moduli of elasticity of the nanocomposites increased continuously with increasing CNF loading. The results of the mechanical property measurements are consistent with the rheological data. The CNF appeared to be better dispersed (less-aggregated nanofibers) in the PLA reinforced with 5 wt.% CNF and the PHB carrier. Possible applications for the composites studied in this research include packaging materials, construction materials, and auto parts for interior applications.


Subject(s)
Cellulose/chemistry , Nanofibers/chemistry , Polyesters/chemistry , Food Packaging , Mechanical Phenomena , Rheology
19.
ACS Macro Lett ; 5(7): 849-853, 2016 Jul 19.
Article in English | MEDLINE | ID: mdl-35614764

ABSTRACT

The role of an interval of shear flow in promoting flow-induced crystallization (FIC) of poly(ether ether ketone) (PEEK) was investigated by melt rheology and calorimetry. At 350 °C, just above the melting temperature of PEEK (Tm), a critical shear rate for initiating the formation of flow-induced precursors was found to coincide with the shear rate at which the Cox-Merz rule abruptly begins to fail. On cooling the sheared samples to 320 °C, FIC can be up to 25× faster than quiescent crystallization. Using rheology and differential scanning calorimetry, the stability of FIC-induced nuclei was investigated by annealing for various times at different temperatures above Tm. The persistence of shear-induced structures slightly above Tm, along with the complete and rapid erasure of FIC-induced nuclei above the equilibrium melting temperature, suggests that FIC leads to thicker lamellae compared with quiescently crystallized samples.
