Gongcheng Kexue Yu Jishu/Advanced Engineering Science (ISSN: 2096-3246) is a bi-monthly, peer-reviewed international journal. Originally founded in 1969, it has been indexed in Scopus since 2017 and is published by the Editorial Department of the Journal of Sichuan University. The journal covers the whole of engineering as well as mathematics and physics; its scope includes, but is not limited to, the following fields:
A compact six-port Multiple-Input Multiple-Output (MIMO) antenna that operates over the whole license-free Ultra-Wideband (UWB) spectrum of 3.1–10.6 GHz is presented in this study. The six antennas are arranged to provide spatial diversity, and the antenna elements are conventional rectangular patch UWB antennas. This MIMO antenna design considers and addresses four main antenna issues: low radiation efficiency, high mutual coupling, poor impedance matching, and poor voltage standing wave ratio (VSWR). The antenna was built with the microstrip patch approach in a Computer Simulation Technology (CST) environment; the substrates were designed and all simulations were carried out in CST. The patch is copper and the substrate is FR-4. The antenna has a 50 Ω feedline and measures 30 × 25 × 0.3 mm. Examination of the antenna's performance characteristics showed S-parameters with a broad frequency response from 3.1 to 10 GHz. With a radiation efficiency better than -1 dB (68%) over the frequency range, the system loses less power than it absorbs, making it highly dependable. The antenna's VSWR remains above 1 and below 2 across the band, indicating a good impedance match. This architecture is therefore well suited to communication systems that need to transmit signals quickly and efficiently.
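For context, the VSWR figure quoted above maps directly onto a reflection-coefficient bound through the standard transmission-line relations (textbook definitions, not results from this paper):

```latex
\mathrm{VSWR} = \frac{1 + |\Gamma|}{1 - |\Gamma|},
\qquad
\Gamma = \frac{Z_{\mathrm{in}} - Z_0}{Z_{\mathrm{in}} + Z_0},
\quad Z_0 = 50\,\Omega .
```

A VSWR below 2 therefore corresponds to |Γ| < 1/3, i.e. a return loss better than roughly 9.5 dB at the 50 Ω feed.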
This work focuses on the design and measurement of the distortion impact of an ultra-wideband radar antenna. The equal-element low-pass prototype was used as the model, which minimizes insertion loss while delivering the required stopband attenuation. The network model was constructed from the low-pass prototype using coupled-cavity-resonator theory. The design of the narrow-band filter relied on the coupling coefficients between resonators and the external Q factors at the two terminal resonators as its fundamental parameters. The microwave implementation used λ/4 inductively coupled TEM-mode coaxial resonators. The proposed filter is compact, stable, readily manufacturable, and cost-effective.
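For reference, coupled-resonator designs of this kind commonly extract the two fundamental parameters named above from simulated or measured resonance data using the standard relations (generic filter-synthesis formulas, not specific to this paper):

```latex
k_{ij} = \frac{f_2^{2} - f_1^{2}}{f_2^{2} + f_1^{2}},
\qquad
Q_e = \frac{f_0}{\Delta f_{3\,\mathrm{dB}}},
```

where f_1 and f_2 are the split resonant frequencies of a coupled resonator pair, f_0 is the center frequency, and Δf_3dB is the 3 dB bandwidth of a terminal resonator loaded by the external circuit.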
This research conducts a comparative investigation of thermal effects on the microwave performance of high-electron-mobility transistors (HEMTs) in GaAs and GaN technologies. The impact of changes in ambient temperature on microwave performance is assessed using scattering-parameter data and the associated equivalent-circuit models. The devices under investigation are two HEMTs with an identical gate width of 200 μm but manufactured in distinct semiconductor technologies: GaAs and GaN. The examination is conducted under controlled low- and high-temperature settings, with the temperature adjusted from −40 °C to 150 °C. The temperature's influence varies significantly with the chosen operating state; the bias point is therefore picked to provide as fair a comparison between the two technologies as feasible. Both technologies exhibit similar tendencies, although temperature has a greater influence on the GaN device.
The goal of the current work is to formulate and evaluate a bioadhesive vaginal gel loaded with Itraconazole nanosponges to ensure a longer residence time at the infection site and a favorable drug release profile. Methods: Nanosponges were prepared by the solvent evaporation method at various ratios of Itraconazole to β-cyclodextrin. Physicochemical evaluation of the nanosponges included determination of zeta potential, polydispersity, particle size, entrapment efficiency, and surface morphology by scanning electron microscopy (SEM). Drug-excipient compatibility was established by FTIR and DSC studies. The bioadhesive gel was prepared using Carbopol/Hypromellose/Sodium Carboxymethyl Cellulose/HPC, with propyl paraben and methyl paraben as preservatives. The pH was adjusted with triethanolamine, which resulted in a translucent gel. The optimized Itraconazole nanosponge formulation was dispersed into the gel base. Nanosponge-in-gel formulations were evaluated for pH, viscosity, spreadability, extrudability, and drug content. Ex vivo diffusion studies of the gel were performed on goat vaginal mucosa, and an in vitro drug release study was performed using a cellophane membrane. Results: The optimized nanosponge batch IDLNS12 (drug-polymer ratio 1:1) showed an entrapment efficiency of 90.44%. The particle size of all formulations was below 310 nm, and regular, spherical particles were observed in the SEM photographs. The optimized gel formulation INSG4 (Carbopol and HPC) showed a viscosity of 4464 cps at 2-10 RPM, a gel strength of 91.76 N load, and a spreadability of 35.72 g·cm/s. INSG4 showed 99.98% drug release at 12 h and a mucoadhesive time of >12 h. Conclusion: The results suggest that Itraconazole-loaded β-cyclodextrin nanosponges in a mucoadhesive gel would provide a means for sustained treatment of vaginal infections.
The teak tree (Tectona grandis) makes several positive contributions to the environment: quality wood raw material, deep and strong roots that help maintain soil stability and prevent erosion, and habitat for various animal species. Teak also plays an important role in producing oxygen, yet people rarely observe the growth of a teak tree from small to large directly, because it can take decades. This study develops the Teak Tree Computational Model (TTCM) using Functional-Structural Plant Modeling (FSPM) and the growth Grammar-related Interactive Modelling Platform (GroIMP). FSPM and GroIMP are used to morphologically construct virtual teak tree growth, from trunk to branch to leaf, and to analyze its environmental and economic contribution. The dataset used in this research covers 20 years of teak tree growth in Saradan, East Java, Indonesia. The model can simulate the morphological growth of single and multiple teak trees and can predict their environmental and economic contributions. The model simulated that one 20-year-old teak tree can produce 17 L of oxygen per hour and Indonesian Rupiah (IDR) 1,200,000 of wood, while in the real world a teak tree produces about 15 L of oxygen per hour and IDR 850,000-1,550,000.
Global computer networks have enabled ordinary users, companies, organizations, and medical institutions to gain virtually unlimited access to data arrays. Developing systems capable of ensuring the good performance and secure operation of a standard Computer Telecommunication Network (CTN) has therefore become one of the most pressing tasks in the medical industry. Consequently, this study aims to create an ordered chain of operations that performs information encryption to enhance the security of data transmission and exchange. The study examines existing ordered chains of encryption operations and assesses their strengths and weaknesses, and a framework for implementing cryptographic algorithms is proposed. This algorithm structure enables verification of the existence of the correct key along the specified path, thereby enhancing the overall security of the system. The results indicate that the optimal encryption variant is an ordered chain of encoding operations that relies on cryptography. Testing demonstrated that the developed ordered chain of operations exhibited several advantages over its analogs, with an efficiency exceeding theirs by more than fourfold. Implementing the proposed ordered chain of operations would provide significantly safer operation of a standard CTN in a typical Medical Institution (MI).
The objective of this work is to create and deploy a sophisticated pest detection system for agricultural areas that achieves a 99.7% accuracy rate using the EfficientNet-B4 model. Pests seriously threaten crop productivity and food security, and conventional pest management techniques are frequently inaccurate and ineffective. The research uses cutting-edge deep learning methods, particularly the EfficientNet-B4 architecture, which is well known for its exceptional image classification performance. The model is trained on a large dataset of images of crops affected by various pests. The main novelty is the model's precision in identifying and categorizing pests, which enables early and targeted pest management. The system performs far better than current ones, giving farmers a dependable instrument to quickly identify and address pest-related problems. By reducing the use of chemical pesticides, the approach helps increase crop output, reduce agricultural losses, and promote sustainable farming methods.
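As an illustration of the approach described above, a minimal fine-tuning sketch (not the authors' code; torchvision is assumed, and the class count is a hypothetical placeholder) might look like:

```python
# Minimal sketch: adapt a pretrained EfficientNet-B4 to pest classification.
# NUM_PEST_CLASSES is an assumption; the paper does not specify the count here.
import torch
import torch.nn as nn
from torchvision import models

NUM_PEST_CLASSES = 12  # hypothetical number of pest categories

model = models.efficientnet_b4(weights=models.EfficientNet_B4_Weights.IMAGENET1K_V1)
# Swap the ImageNet head for one sized to the pest classes.
in_features = model.classifier[1].in_features  # 1792 for EfficientNet-B4
model.classifier[1] = nn.Linear(in_features, NUM_PEST_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One gradient step on a batch of pest images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```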
Background: Headache is a common neurological disorder that affects millions of people worldwide. This study aimed to assess the clinical characteristics, prescription patterns, and concurrent diagnoses of headache patients attending a tertiary care hospital. Method: This is a retrospective, observational study of anonymised data from the database of JP SUPER SPECIALITY HOSPITAL NEURO MEDICAL CENTER, TIRUPATI-517502, Andhra Pradesh, India, over the 3-month period from January 2023 to March 2023. The sample comprised 290 persons over the study period, with detailed data on in- and outpatients based on ICHD-3 diagnoses. Results: Among the 290 patients, 179 were female, and headache sufferers were mostly in the 18-28 age group. Chronic headache was the most common type, seen in 162 patients, and fibromyalgia was the major concurrent disease, found in 89 headache patients. The major class of drugs prescribed was NSAIDs, given to 281 patients with common headache, whereas anti-migraine drugs such as triptans were prescribed specifically for migraine patients. "Headache associated with periorbital pain, holocranial throbbing type" was the major complaint, reported by 134 patients during the study period. Conclusion: The most prevalent age group was 18-28, in which females were dominant. Chronic headache was the most common headache type and fibromyalgia the major concurrent disease. NSAIDs were the major class of drugs prescribed for common headache, whereas anti-migraine drugs such as triptans were prescribed specifically for migraine patients.
Decision trees (DTs) play a crucial role in machine learning applications due to their quick execution and high interpretability; however, training them is often time-consuming. We propose a hardware training accelerator designed to expedite the training process. The accelerator is implemented on a field-programmable gate array (FPGA) with a maximum operating frequency of 62 MHz. Its architecture combines parallel execution, to reduce training time, with pipelined execution, to minimize resource consumption, resulting in a significant acceleration of training. Compared with a C-based software implementation, our hardware implementation is at least 14 times faster. One notable feature of the architecture is its adaptability: the design can be retrained on a new dataset using a single RESET signal. This on-the-go training capability enhances the versatility of the hardware, making it suitable for a wide range of applications.
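To show where the training time goes, here is a plain-software sketch of the impurity-driven split search that dominates DT training, i.e. the kind of loop a hardware design can parallelize; this is illustrative NumPy, not the paper's FPGA description:

```python
# Illustrative sketch of the split-search loop in decision-tree training.
import numpy as np

def gini(labels):
    """Gini impurity of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(feature, labels):
    """Scan candidate thresholds for one feature and return the best one.
    Each threshold evaluation is independent, which is why parallel
    hardware execution pays off."""
    best_t, best_score = None, np.inf
    for t in np.unique(feature):
        left, right = labels[feature <= t], labels[feature > t]
        if len(left) == 0 or len(right) == 0:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score
```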
With the rapid advances in digital networks, information technology, and digital communication, it has become very important to secure information transmission between sender and receiver. Security is an essential feature of communication and other text-based information exchange, because intruders wait for chances to attack data and access private information. Two important classes of techniques provide this security: cryptography and steganography. Both are well-known methods widely used in information security to keep exchanged data confidential. In cryptography, the sender uses an encryption key to encrypt the message, the encrypted message is transmitted through an insecure public medium, and a decryption algorithm with the corresponding key recovers the data; reconstruction of the original message is possible only if the receiver holds the decryption key. In steganography, the hidden message is embedded in another object using suitable algorithms. Since there are many kinds of cryptographic and steganographic techniques, we use different combinations of the two. Dual steganography is the approach in which steganography and cryptography are used together, i.e., the combined use of both techniques.
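A minimal sketch of the dual idea, assuming a toy XOR stream cipher in place of a real one and least-significant-bit (LSB) embedding as the steganographic step (illustrative only, not a production scheme):

```python
# Dual steganography sketch: encrypt first, then hide the ciphertext in pixels.
import numpy as np

def xor_encrypt(message: bytes, key: bytes) -> bytes:
    """Toy XOR stream cipher (stands in for a real cipher such as AES)."""
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(message))

def lsb_embed(cover: np.ndarray, payload: bytes) -> np.ndarray:
    """Write payload bits into the LSBs of a flattened 8-bit image."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    stego = cover.flatten().copy()
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # cover image
secret = xor_encrypt(b"meet at dawn", b"key")
stego = lsb_embed(cover, secret)
recovered = xor_encrypt(lsb_extract(stego, len(secret)), b"key")
assert recovered == b"meet at dawn"
```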
Distributed denial-of-service (DDoS) attacks have emerged as one of the most serious and fastest-growing threats on the Internet. A DDoS attack is a cyber attack that targets a specific system or network in an attempt to render it inaccessible or unusable for a period of time. Improving the detection of diverse types of DDoS threats with better algorithms and higher accuracy, while keeping computational cost under control, has therefore become the most significant component of DDoS threat detection. To defend a targeted network or system properly, it is critical to first determine the type of DDoS attack that has been launched against it. This paper presents a number of ensemble classification techniques that combine the performance of various algorithms, and compares them with existing machine learning algorithms on their effectiveness in detecting different types of DDoS attacks using accuracy, F1 scores, and ROC curves. The results show high accuracy and good performance.
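A small sketch of the ensemble idea using scikit-learn (the base learners and synthetic data are assumptions for illustration; the paper's exact models are not specified here):

```python
# Soft-voting ensemble sketch for attack classification; synthetic stand-in data.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# In practice these would be labeled network-flow features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(max_depth=10)),
    ],
    voting="soft",  # average the predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "F1:", f1_score(y_te, pred))
```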
Machine learning refers to the process by which computers are taught to perform certain tasks using algorithms and statistical models. Many fields and industries benefit from machine learning, including medical diagnostics, audio identification, traffic prediction, statistical arbitrage, and more. The traffic environment includes anything that might affect vehicle traffic, such as traffic lights, accidents, rallies, or even road maintenance. If drivers or passengers are informed about these and the many everyday events that can affect traffic, they can make educated decisions; such information will also be useful for future vehicles. The quantity of traffic data generated has increased dramatically over the last several decades, and big data approaches have been developed with a focus on transportation. Present traffic flow forecasting algorithms are insufficient for real-world, multinational scenarios because they consider only a subset of feasible traffic prediction models. Identifying traffic flow is time-consuming because of the enormous quantity of data the transportation system produces. To simplify the analysis, we intend to use deep learning, genetic programming, soft computing, and machine learning to sift through the massive amounts of data collected by the transportation system. Applying image processing methods to traffic sign identification considerably facilitates the proper training of autonomous vehicles.
Every second, the traffic monitoring system receives vast quantities of data on the movement of vehicles, and monitoring these indicators involves a significant commitment of time and effort. Managing and controlling traffic can be made easier with a deep learning technique, the Convolutional Neural Network. Data from traffic monitors that has already been analyzed is used to construct the training dataset. Building the Traffic net requires transferring the network to the new domain and retraining it on data from traffic-related applications. Region detection is one of the large-scale applications of this Traffic net and, more crucially, it can be used in a variety of contexts. The case study provides impressive evidence of efficiency, with faster detection and improved accuracy. The preliminary examination suggests it could be incorporated into a traffic monitoring system and, in the long run, an enhanced intelligent transportation system.
This research paper presents a comprehensive numerical study on unsteady double stratified flow through a Brinkman porous medium with heat generation and radiation effects. The Crank-Nicolson method is employed to solve the governing equations, providing accurate and stable numerical solutions. The investigation focuses on understanding the intricate interactions between fluid flow, heat generation, and radiation in a porous medium, which is particularly relevant in various engineering and environmental applications.
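As a pointer to the numerical core, the sketch below applies a Crank-Nicolson step to a simplified 1-D diffusion equation u_t = α u_xx; the paper's coupled flow/heat/radiation system is more involved, so this is purely illustrative:

```python
# Crank-Nicolson step for u_t = alpha * u_xx with Dirichlet boundaries.
import numpy as np

def crank_nicolson_step(u, alpha, dx, dt):
    n = len(u)
    r = alpha * dt / (2 * dx ** 2)
    # Solve (I - r*L) u^{n+1} = (I + r*L) u^{n}, L = second-difference operator.
    A = np.eye(n) * (1 + 2 * r)
    B = np.eye(n) * (1 - 2 * r)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -r
        B[i, i - 1] = B[i, i + 1] = r
    # Hold the boundary values fixed.
    A[0, :], A[-1, :] = 0, 0
    B[0, :], B[-1, :] = 0, 0
    A[0, 0] = A[-1, -1] = 1.0
    B[0, 0] = B[-1, -1] = 1.0
    return np.linalg.solve(A, B @ u)

u = np.sin(np.linspace(0, np.pi, 51))  # initial profile
for _ in range(100):
    u = crank_nicolson_step(u, alpha=1.0, dx=0.02, dt=1e-4)
```

The scheme averages the explicit and implicit discretizations, which is what gives Crank-Nicolson its second-order accuracy in time and unconditional stability for this class of problems.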
This paper centers on the effect of deep cryogenic treatment on the microstructure, mechanical properties, and wear properties of Al 6061. The main goal was to understand how much the wear behavior of cryogenically treated aluminum grades changes. To conduct the wear tests, experimental investigation was carried out on aluminum alloys with cryogenic coolants. The cryogenic coolant increased the wear resistance of aluminum by up to 25% compared with the wear of non-cryogenically treated aluminum. The cryogenic treatment was carried out for three different durations, at three different rpms, and under varying loads. The paper also considers the microstructural changes under these varying conditions. The experimental investigation concludes that cryogenically treated aluminum shows an increase in wear resistance of about 25%.
The impact of deep cryogenic treatment (DCT) on the metallurgical and mechanical properties of Al6061-T6 is examined in the present work. The alloy was subjected to DCT at −196 °C for 36 h. Mechanical tests such as Brinell hardness, tensile, and fatigue tests were performed on both untreated and treated samples. It was observed that mechanical properties such as hardness, yield strength, and ultimate tensile strength increased by around 41, 27, and 13%, respectively, for the treated sample. The treated alloy was characterized using techniques such as optical microscopy and energy-dispersive X-ray spectroscopy (EDS) to observe the changes in its metallurgical features. SEM-EDS results show precipitation, a finer distribution of second-phase particles, and higher dislocation density in the treated alloy compared with the untreated alloy. The treatment confers enhanced hardness and strength on the alloy through precipitation hardening and high dislocation density. Fracture morphologies of the treated and untreated samples were characterized using scanning electron microscopy, and the striations were observed to be denser in the treated sample, supporting its higher fatigue strength.
Nowadays, earthquakes severely damage or collapse concrete structures, so there is a need to evaluate their seismic adequacy. We cannot avoid future earthquakes, but we can prepare buildings for safe construction and reduce the extent of damage and loss. An earthquake is a disturbance at some depth below ground level that causes vibrations at the ground surface. A braced frame is a structural system commonly used in structures subject to lateral loads such as wind and seismic pressure; its members are generally made of structural steel, which works effectively in both tension and compression. Buildings not designed for seismic forces may suffer extensive damage or collapse if shaken by severe ground motion. Pushover analysis first came into practice in the 1980s, but its potential has been recognized over the last two decades. The procedure mainly estimates the base shear of a structure and its corresponding displacement. Pushover analysis is a very useful tool for the evaluation of new and existing structures.
In recent years, the utilization of marble powder has garnered considerable interest across multiple disciplines, including civil engineering and construction material sciences. The marble industry generates a substantial waste quotient, estimated at 30-40% of its total output, leading to serious environmental dust pollution. This research examines the effects on concrete properties when marble powder is used to partially substitute cement. The aim is to explore waste reduction strategies while maintaining the necessary structural integrity of the material. In this context, M40 grade concrete was modified by replacing cement with marble powder in proportions of 5%, 10%, 15%, 20%, and 25% by weight. The experiment involved testing the material's flexural, tensile, and compressive strengths at intervals of 7, 28, 56, and 90 days. Results indicate that replacing 10-15% of cement with marble powder in M40 grade concrete is feasible, preserving its characteristic strength profile and not diminishing its compressive capacity. This substitution not only lowers concrete production costs but also contributes to sustainable manufacturing practices. For this study, 20 mm aggregate size was used, along with OPC 53 grade cement. Concrete modified with marble powder demonstrated a compressive strength increase of 20% over traditional concrete formulations.
The utilization of pressure-reducing valves stands as a highly effective method for managing pressure within a water distribution system, thereby minimizing leakage. To enhance sustainability and management, it is advisable to strategically position an appropriate number of pressure-reducing valves within the water distribution system. A revised version of the reference pressure algorithm, sourced from existing literature, is employed to determine the optimal placement of valves using a simplified approach. However, when dealing with extensive water pipeline networks, the modified reference pressure algorithm falls short in pinpointing the most suitable valve locations. To address this limitation, a nodal matrix analysis is introduced to refine the modified reference pressure algorithm. This enhanced algorithm offers a preferable selection of pipeline segments for valve placement from the array of potential pressure-reducing valve sites generated by the adjusted reference algorithm, especially in intricate pipeline networks. The real-world application of this refined algorithm takes place in Campos do Conde II, a water network situated in Kapra. By employing this algorithm, four pipeline locations are identified as optimal valve candidates, a notable improvement compared to the 22 locations suggested by the modified reference pressure algorithm. Consequently, this technique significantly enhances the precision of valve placement, contributing to improved overall network optimization, sustainability, and management. Empirical findings from this study underscore the efficacy of the proposed algorithm, showcasing a substantial 20.08% reduction in water leakage across the network.
Several data mining methods have been proposed in the literature to obtain good results; however, how to effectively use and reconstruct acquired patterns is still an open research question, especially in text mining. Most modern text mining techniques adopt a primarily term-based methodology and therefore suffer from the problems of polysemy and synonymy. Semantic word representation is a key task in natural language processing and text mining. Knowledge of feature relations has been shown to influence classification in a variety of tasks: feature relation patterns are powerful for capturing feature associations and effectively bridge lexical gaps, facilitating numerous applications. Most research focuses on term processing by encoding contextual information; however, many potential relationships, such as relation association patterns and semantic-conceptual relationships, are not well accounted for. In this paper, we propose an integrated method, FRP-LSRA, for enhanced information classification using Feature Relation Patterns (FRP) and Latent Semantic Relation Analysis (LSRA). The FRP is constructed using a modified Bayesian mechanism that discovers relations between features, providing primary knowledge of the association between the data and their related classes. The patterns constructed by FRP are then analyzed for their semantic relations to classify the data accurately. Experiments are performed on an SFPD dataset to evaluate the classification enhancement; the proposed FRP-LSRA achieves 3% higher accuracy compared with state-of-the-art classification methods.
Detecting brain tumors in MRI images is a critical task in medical imaging and plays a crucial role in early diagnosis, treatment planning, and patient care. Classical image processing techniques rely on handcrafted features, which might not capture the full complexity of brain tumors and may struggle to represent the subtle, discriminative features necessary for accurate detection. The proposed model focuses on feature extraction using ensemble ranking approaches. It employs kernel methods, such as Multiple Kernel Learning (MKL), to combine features from multiple kernels or similarity measures, each obtained from a different feature extraction method. The extracted features are further reduced using a CNN and classified using the Random Forest algorithm. The main advantage of MKL is that it can automatically learn the relevance of each kernel and perform feature selection by assigning appropriate weights, allowing the model to focus on the most informative features while discarding less relevant ones. Integrating MKL features with a CNN takes advantage of the diverse, informative representations provided by MKL and the hierarchical feature learning capability of CNNs. This approach can be particularly beneficial when dealing with multi-modal data, data from different sources, or complex tasks where combining complementary features can improve performance and generalization.
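A minimal sketch of the kernel-combination step (hypothetical fixed weights; a full MKL solver would learn them from data, and the feature matrices here are synthetic stand-ins):

```python
# Weighted combination of two kernels fed to a precomputed-kernel classifier.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.svm import SVC

def combined_kernel(Xa, Xb, weights=(0.6, 0.4)):
    """Weighted sum of an RBF and a polynomial kernel (weights assumed)."""
    return weights[0] * rbf_kernel(Xa, Xb, gamma=0.5) + \
           weights[1] * polynomial_kernel(Xa, Xb, degree=3)

# Stand-in feature matrices (e.g., texture and shape descriptors from MRI).
X_train = np.random.rand(100, 32)
y_train = np.random.randint(0, 2, 100)

clf = SVC(kernel="precomputed")
clf.fit(combined_kernel(X_train, X_train), y_train)

X_test = np.random.rand(10, 32)
pred = clf.predict(combined_kernel(X_test, X_train))  # test-vs-train Gram matrix
```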
Reversible data hiding in encrypted images (RDHEI) has been introduced for preserving image privacy while allowing data embedding. RDHEI usually involves three parties: the image provider, the data hider, and the receiver. In terms of key settings, there are three categories: shared independent secret keys (SIK), shared one key (SOK), and shared no secret key (SNK). In SIK, the image provider and data hider must each independently share secret keys with the receiver, whereas in SNK no secret key is shared. However, existing works realize SNK-type schemes using homomorphic encryption, which carries an exorbitant computation cost. In this paper, we address the shared one key (SOK) setting, where only the image provider shares a secret key with the receiver and the data hider can embed a secret message without any knowledge of this key. To realize our SOK scheme in a simple manner, we propose a new technique that uses multi-secret sharing as the underlying encryption, which induces a blow-up in key size. To preserve the efficiency of the key size, we apply compression using lightweight cryptographic algorithms. We then demonstrate our SOK scheme based on the proposed techniques and show its effectiveness, efficiency, and security through experiments and analysis.
Currently, face recognition technology (FRT) is applied ubiquitously. However, due to the abuse of personal face photos on social media, FRT has encountered unprecedented challenges, which has driven the development of face spoofing detection (also called face liveness detection or face anti-spoofing) technology. Traditional face spoofing detection methods usually extract features manually and distinguish real from fake faces through a single cue, which can lead to low accuracy and poor generality. In addition, the effectiveness of existing methods is affected by illumination variations. To address these issues, we propose a multi-scale color inversion dual-stream convolutional neural network, termed MSCI-DSCNN. One stream of the proposed model converts the input RGB images into grayscale and conducts multi-scale color inversion to obtain MSCI images, which are then fed into an improved MobileNet to extract face reflection features. The other stream feeds the RGB images directly into the improved MobileNet to extract face color features. Finally, the features extracted by the two branches are fused and used for face spoofing detection. We evaluate the proposed framework on three publicly available databases, CASIA-FASD, REPLAY-ATTACK, and OULU-NPU, and achieve promising results. To further measure the generalization capability of the approach, extensive cross-database experiments are performed, and the results exhibit the great effectiveness of our MSCI-DSCNN method.
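A rough sketch of the MSCI preprocessing as described, with assumed scale values and a standard luminance conversion (not the authors' exact pipeline):

```python
# Multi-scale color inversion sketch: grayscale, then inversion at several scales.
import numpy as np
from scipy.ndimage import gaussian_filter

def msci(rgb: np.ndarray, sigmas=(1, 4, 16)) -> np.ndarray:
    """rgb: float image in [0, 1], shape (H, W, 3).
    Returns a stack of inverted grayscale images, one per blur scale."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # standard luminance weights
    channels = [1.0 - gaussian_filter(gray, sigma=s) for s in sigmas]
    return np.stack(channels, axis=-1)  # (H, W, len(sigmas)) stream input

image = np.random.rand(224, 224, 3)   # stand-in for a face crop
msci_input = msci(image)              # fed to the first stream of the network
```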
The wireless communication revolution is profoundly impacting data networks, telecommunications, and integrated networks. Personal area networks, wireless local area networks, and cellular systems all offer the possibility of fully distributed portable computing and communication, and this promise is driving their development. Data transmission between the sensor nodes and the gateway can account for a significant part of the total energy consumption of a wireless sensor network (WSN). The proposed Modified Monarch Butterfly Optimization (MMBO) is evaluated in terms of throughput, packet delivery ratio, average delay, and energy consumption, and adequate performance is verified through simulation analysis of the network nodes.
We introduce a novel method for speeding up image processing operators. Our method employs a fully convolutional neural network (CNN) trained on input-output pairs that demonstrate the operator's functionality. After training, the original operator no longer needs to be executed: the trained CNN operates at full resolution with constant runtime. We explored the impact of network architecture on approximation accuracy, runtime, and memory usage, and identified a specific architecture that balances these considerations. We evaluated ten sophisticated image processing operators, encompassing variational models, multiscale tone and detail adjustments, photographic style transfer, nonlocal dehazing, and nonphotorealistic stylization; all operators were approximated using the same model. Our experiments demonstrated that the method surpasses previous approximation techniques in accuracy: on the MIT-Adobe dataset, we observed an 8.5 dB increase in approximation accuracy as measured by PSNR (from 27.5 to 36 dB) compared with existing schemes, and a threefold reduction in DSSIM relative to the most accurate prior approximation scheme, all while maintaining superior speed. We verified that our models generalize well across datasets and resolutions, and we explored several extensions of the approach. In summary, we propose a method for accelerating image processing operators using a CNN trained on input-output pairs; it outperforms previous approximation schemes in accuracy, generalizes effectively, and presents opportunities for further development.
The smooth exchange and sharing of data across networked physical and virtual objects is made possible by the Internet of Things (IoT), a fast-expanding and inventive field. It does away with the necessity for human intervention and provides cutting-edge services for many real-world uses. IoT envisions a world in which computing is pervasive and offers improved connectivity for a variety of applications around the globe. The 3-layered and 4-layered IoT architectures now in use, however, have limits when it comes to fulfilling the specifications of real-world applications. To solve this problem, we offer a 5-layered IoT architecture that emphasises useful and intelligent applications while properly interpreting IoT features. The development, definition, five-layered structure, technology, and applications of IoT are all covered in this architecture overview. In addition to its advantages, IoT is vulnerable to security weaknesses that jeopardise data sharing and exchange, and to provide a secure environment it is essential to address these security challenges. This study highlights the main privacy and security issues presented by each layer of the IoT architecture; by taking these issues into account, we can create IoT solutions that prioritise data security and privacy.
Security concerns have become a major difficulty for researchers, developers, manufacturers, designers, and other stakeholders in the mobile phone industry as a result of rapid technological advancements in this space. It usually takes some time for such technology to be consumed by the market, which provides security teams with an opportunity to design and implement robust security measures. The proliferation of smartphones and the increasing number of people who use them to check email, conduct financial transactions, and access other types of confidential information have created a dynamic new threat environment [1]. The fact that almost anybody can use the device has also contributed to widespread smartphone distribution ahead of the time when adequate safety measures have been implemented. Comparing the market shares of the various smartphone operating systems, Android now dominates, and as the capabilities and functions of these phones advance, so does their susceptibility to security breaches. This article presents the results of a comprehensive investigation of the significance of Android security, the nature of possible vulnerabilities, and the state of existing security practices for protecting against them.
This is the second of a two-part paper summarizing and reviewing research in mechanical engineering design theory and methodology. Part I included 1) descriptive models; 2) prescriptive models; and 3) computer-based models of design processes. Part II includes: 4) languages, representations, and environments for design; 5) analysis in support of design; and 6) design for manufacture and the life cycle. For each area, we discuss the current topics of research and the state of the art, emphasizing recent significant advances. A final section is included that summarizes the six major areas and lists open research issues.
This study evaluated the effectiveness of metallic materials as chills in sand casting of aluminium alloy. Four plates of dimensions 165 mm × 80 mm × 10 mm were cast using sand moulds. Steel, copper, and brass chills in the form of cylindrical bars 7 mm in diameter and 50 mm long were inserted side by side at regular intervals of 30 mm in each sand mould, and the last sample was left unchilled. Experimentation involved testing the mechanical properties and metallographic analysis of the cast samples. The results revealed that the sample chilled with copper has the highest mechanical properties.
The process of pathfinding in video games has been studied for quite some time; it is the most talked-about and infuriating artificial intelligence (AI) issue in games right now. Prior to the development of the A* algorithm, there were several attempts to find an optimal solution to the shortest-path problem, including Dijkstra's algorithm, breadth-first search, and depth-first search. Since its inception, A* has drawn thousands of researchers to contribute to it, and many other algorithms and methods based on A* have been developed. Several well-known A*-based algorithms and methods are analyzed and compared in this study, whose goal is to investigate the connections between the different A* variants. The first part of this article provides a high-level introduction to pathfinding. Then, using a discussion of the A* algorithm's inner workings as a springboard, we present many optimization methods from various vantage points. Finally, a conclusion is drawn and a number of real-world examples of the pathfinding algorithms' implementation in actual games are provided.
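For reference, the baseline A* that the surveyed variants build on can be written compactly as follows (a generic grid implementation with a Manhattan heuristic, not code from the surveyed papers):

```python
# Classic A* on a 4-connected grid.
import heapq
import itertools

def astar(grid, start, goal):
    """grid: 2-D list, 0 = free, 1 = wall; start/goal: (row, col) tuples."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()  # tiebreaker so the heap never compares parents
    open_heap = [(h(start), next(tie), 0, start, None)]
    came_from = {}
    g_cost = {start: 0}
    while open_heap:
        _, _, g, cell, parent = heapq.heappop(open_heap)
        if cell in came_from:
            continue  # already expanded via a cheaper route
        came_from[cell] = parent
        if cell == goal:  # walk parents back to recover the path
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), next(tie), ng, nxt, cell))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall row
```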
The patch antenna is a radiating element that radiates along the walls of its edges. The size of the patch antenna reduces as the resonant frequency increases. The gain and bandwidth of a single patch antenna are not sufficient for military applications, so a patch array antenna is designed. In the present work, a square patch antenna designed at a resonant frequency of 5 GHz and its array analysis are presented. Array antennas have wide applications in both military and wireless communications. The side lobe level of linear and planar patch antenna arrays is -13.5 dB, which is not suitable for tracking targets. In this work, the side lobe level of the patch array antenna is decreased by introducing a standard amplitude distribution, reducing it from -13.5 dB to -31.24 dB; a raised cosine amplitude distribution is used to achieve this reduction.
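A quick numerical illustration of the effect described above: the array factor of a uniformly spaced linear array under uniform versus raised-cosine weighting (the 16-element, half-wavelength configuration is an assumption for illustration, not the paper's design):

```python
# Compare peak sidelobe levels for uniform vs. raised-cosine amplitude tapers.
import numpy as np

N, d = 16, 0.5                        # element count, spacing in wavelengths
theta = np.linspace(0.0, np.pi, 4000)
n = np.arange(N)

def pattern_db(weights):
    """Normalized array-factor magnitude in dB over all angles theta."""
    psi = 2 * np.pi * d * np.cos(theta)              # inter-element phase shift
    af = np.abs(np.exp(1j * np.outer(psi, n)) @ weights)
    return 20 * np.log10(af / af.max() + 1e-12)

def peak_sidelobe_db(weights):
    """Highest lobe outside a small window around the broadside main beam."""
    af_db = pattern_db(weights)
    main_beam = np.abs(theta - np.pi / 2) < 0.15
    return af_db[~main_beam].max()

uniform = np.ones(N)
raised_cos = 0.5 - 0.5 * np.cos(2 * np.pi * (n + 0.5) / N)  # raised-cosine taper

print("uniform taper sidelobe:", peak_sidelobe_db(uniform))     # about -13 dB
print("raised-cosine sidelobe:", peak_sidelobe_db(raised_cos))  # far lower
```

The taper trades a wider main beam for the much lower sidelobes, which is the trade-off behind the -13.5 dB to -31.24 dB improvement reported above.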
An antenna is a device that radiates electromagnetic energy into free space in all directions. Single-antenna characteristics such as high beamwidth, low gain, and low bandwidth are not sufficient for beam steering in radar communication systems, so array antennas are designed to improve beamwidth, gain, and bandwidth; for this purpose an array of horn antennas is designed. In conventional arrays, the side lobe level of -13.5 dB is an obstacle to finding objects in a radar system, since the main beam to first side lobe ratio is -13.5 dB and much of the power diverted from the main beam goes into the first side lobe. The array system overcomes this by reducing the side lobe level through a standard amplitude distribution. In this work, a triangular amplitude distribution is used to reduce the side lobe level to -26.8 dB. A standard horn antenna is used to produce narrow beams and high gain. Neglecting inter-element interference, horn arrays for N = 10, 20, 40, and 60 are designed; by applying the standard amplitude distribution to these arrays, the side lobe levels are also reduced and are compared with isotropic arrays. The results show good agreement.
To account for the consequences of uncertainty in the Distribution Static Compensator (DSTATCOM) allocation and sizing problem, this study provides a novel stochastic framework based on probabilistic load flow. To capture the uncertainty associated with load prediction inaccuracy, the suggested technique is based on the point estimate method (PEM). In addition, a novel optimization technique inspired by the bat algorithm (BA) is developed for conducting global searches. The objective functions studied are minimizing overall active power losses and minimizing bus voltage variation. An interactive fuzzy satisfying approach is used in the multi-objective formulation to achieve an appropriate balance between the two objectives. The IEEE 69-bus distribution system is used to evaluate the practicability and satisfactory performance of the suggested technique.
This article is a summary of a larger research project on the Smart Grid (SG) and the function of Advanced Metering Infrastructure (AMI) in SG. The survey was conducted as part of research on the viability of establishing a Net-Zero neighbourhood in a city in Ontario, Canada. SG is not a single technology but rather an amalgam of several disciplines across engineering, communication, and management. This article presents AMI technology and its current position as the backbone of SG, in charge of gathering all the data and information from loads and consumers. Along with demand-side management (DSM), AMI is in charge of creating control signals and directives to carry out the required control activities. In this work, we provide an overview of SG and its characteristics, clarify the connection between SG and AMI, describe the three primary components of AMI, and address relevant security concerns.
In this study, we analyse a multiple-input multiple-output (MIMO) amplify-and-forward (AF) relay channel with inaccurate channel estimates on all links, which necessitates joint optimization of the source precoder, relay transceiver, and destination equalizer. The joint optimization problem is nonconvex and lacks closed-form solutions. The optimization of a single variable with the others fixed, however, is a convex problem amenable to efficient solution through interior-point techniques. In this setting, the direct-link component of the AF MIMO relay channel has inspired an iterative approach with assured convergence. For the double-hop relay case without receive-side antenna correlations in each hop, it has also been demonstrated that global optimality can be confirmed by solving the remaining joint power allocation using the Geometric Programming (GP) technique under a high signal-to-noise ratio (SNR) approximation, as the structures of the source precoder, relay transceiver, and destination equalizer all have closed forms. To confirm that the iterative strategy provides good solutions with acceptable complexity in the latter scenario, we evaluated its performance using simulations. Compared with a nonrobust system, which uses the estimated channels as if they were exact, the simulation results confirm the robustness of the suggested architecture.
The potential of vehicle-to-grid (V2G) devices is explored as a means of mitigating the intermittent nature of large-scale wind generation. The process begins with the planning and modelling of an energy management and efficiency system. The desired amount of grid-connected wind power, the power needed for electric vehicles (EVs), and the power stored in supercapacitors are all determined using the wavelet packet decomposition technique. The knapsack problem is then used to create the energy management model for EVs, which can assess the requirements of an EV fleet. In addition, a dynamic programming technique is used to create a delivery strategy tailored to the combined use of electric vehicles and wind power. A case study demonstrates that the energy management and optimization method for V2G systems achieves noticeable performance improvements over benchmark techniques.
Inspired by compressed sensing (CS) theory and its strong association with low-density parity-check codes, we propose compressive transmission, which uses CS as the channel code and directly transmits multi-level CS random projections through amplitude modulation. Compressive collaboration mechanisms within a relay channel are the topic of this paper. We examine and quantify the achievable rates of four decode-and-forward (DF) techniques in a three-terminal half-duplex Gaussian relay channel: receiver diversity, code diversity, consecutive decoding, and concatenated decoding. Numerical computation and simulated experiments are used to evaluate the four schemes. We also analyse an alternative source-channel coding strategy for transmitting sparse sources and compare it with compressive collaboration. Compressive collaboration shows significant promise in terms of transmission efficiency and channel adaptability.
Wireless sensor networks have recently received a great deal of interest in both industry and everyday life, owing to the proliferation of sensor technology, MEMS, wireless communications, and the widespread use of wireless sensors. This study develops a wireless sensor network-based agricultural environment monitoring system, complete with hardware and software designs for the sensor nodes, to support agricultural modernisation and agricultural environment protection. Experiments show that the system achieves remote real-time monitoring for unattended farm environments with low power consumption, stable operation, and high accuracy.
Platforms for the Industrial Internet of Things make it possible to make better decisions based on the data at hand, which in turn boosts efficiency in manufacturing and other commercial operations. However, with proprietary formats coupled with particular IoT gear, data interchange and provisioning between the data sources and platform services continue to be an issue. As a result, we propose and describe in depth an open-source software solution called Thing to Service Matching (TSMatch), which enables fine-grained semantic matching between available IoT data and services. The report also includes an assessment of the proposed solution's performance in a testbed setting and details its deployment in two distinct aerospace production scenarios.
Using an AI algorithm, the capacity of wireless multi-channel networks may be increased, and network performance may improve if interference is reduced. The method has three stages: first, a model of the wireless environment is created; second, performance is optimised using the appropriate tools; and third, routing is improved by carefully picking performance metrics. Communication in wireless networks is improved by an artificial bee colony optimization method with evaluation characteristics, which uses the straightforward actions of bee agents to make synchronised and distributed routing choices. MATLAB simulations clearly show the benefits of this technique: compared with current state-of-the-art models, the performance of the nature-inspired routing algorithm is much higher, and even a very basic agent model has the potential to boost the network's performance. When maximising the output of a routing protocol, a breadth-first-search variant is used to find and deterministically weigh all of the possible paths across a network.
This article discusses the Internet of Things (IoT), including its analysis, techniques and means of protection, the potential of employing edge computing to reduce traffic transmission, the decentralisation of decision-making systems, and information security. There was intensive research into the ways in which IoT systems are attacked, and safeguarding suggestions were developed as a result.
You have purchased a brand new home security device. The package promises that the device will give you full control of your home, allowing you to do everything from controlling the lights to seeing who's knocking at the door. It communicates through your home network using some sort of communication protocol, and perhaps even lets you set a password. Installation simply requires pairing the device to the central Internet of Things hub in your home, like pairing your phone to a Bluetooth speaker. All seems right in the world. But what if the very device that you purchased to secure your home were a portal for attackers to gain access? What if there were open-source tools on GitHub that allowed anyone with a computer to intercept the messages being passed between you and your device? What if there were a search engine as simple as Google that specifically found the IP addresses of devices such as yours and allowed anyone to see the video content it captured with the click of a button? What if the personal computer security risks of the mid-1990s resurfaced, but on a larger, much riskier scale? What if your security device wasn't very secure at all?
This paper studies the process of speaker identification over Bluetooth networks. Bluetooth channel degradations are considered prior to the speaker identification process. The work employs Mel-frequency cepstral coefficients for feature extraction. Features are extracted from different transforms of the received speech signals, such as the discrete cosine transform (DCT), signal plus DCT, discrete sine transform (DST), signal plus DST, discrete wavelet transform (DWT), and signal plus DWT. A neural network classifier is used in the experiments; the training phase uses clean speech signals, while the testing phase uses signals degraded by communication over the Bluetooth channel. A comparison between the different feature extraction methods shows that the DCT achieves the highest recognition rates.
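A small sketch of the transform-domain feature idea (synthetic frame values; SciPy's DCT/DST routines stand in for the transforms named above):

```python
# Build the alternate feature domains: raw signal, DCT, DST, and signal+DCT.
import numpy as np
from scipy.fft import dct, dst

frame = np.random.randn(256)            # stand-in for one windowed speech frame

features = {
    "signal": frame,
    "dct": dct(frame, norm="ortho"),    # discrete cosine transform
    "dst": dst(frame, norm="ortho"),    # discrete sine transform
    "signal+dct": np.concatenate([frame, dct(frame, norm="ortho")]),
}
# Each representation would then pass through the Mel filterbank / cepstral
# stage and on to the neural-network classifier described above.
```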
The separation of hardware and software is a vital aspect of developing a flexible embedded system. To reach the performance of dedicated hardware, computer architectures that can adapt their hardware to each application are being designed, and reconfigurable computing is a potential way of resolving the conventional trade-off between flexibility and performance. In this research, we first review and describe existing hardware/software partitioning techniques before proposing a novel approach to task division and scheduling that exploits the dynamic reconfiguration and reconfiguration delay of reconfigurable hardware. The suggested method divides a massive program into smaller, more manageable tasks, each related to the others through constraints, and a directed acyclic graph (DAG) is created from the execution order to illustrate the connections between them. Then a method called GATS, which combines the Genetic Algorithm and the Tabu Search algorithm, is used to map the application described by the DAG onto the hardware and software platform. Priority-based scheduling allows for the quickest possible assignment and execution sequence of tasks. The test results demonstrate the method's strong performance and its ability to map the application tasks onto the reconfigurable system.
Recent robotics advancements are making it possible to deploy large numbers of low-cost robots for tasks like surveillance and search. However, coordinating a group of robots to perform such tasks is still difficult. This report summarises recent research papers on multi-robot systems and is divided into two sections. The first section covers research into the pattern formation problem: specifically, how robots can be commanded to form a pattern and keep it. The second section examines research into adaptive strategies for managing networks of robots; in particular, we look into (1) how evolution is used to generate group behaviours, and (2) how learning (lifelong adaptation) is used to make multi-robot systems respond to changes in the environment and in the capabilities of individual robots.
The goal of this research was to study both the normal and abnormal dynamics of spin-triplet states (STSs) in the case of one-photon and one-magnon interactions with the alternating magnetic field and the lattice, respectively. The anisotropic normal dynamics of STSs of molecular single crystals in zero, constant, and weak alternating magnetic fields (weak meaning the absence of saturation in the steady state and of nutation in pulsed EPR) directed along the molecular axes was analytically investigated. Equations were derived for the free motion of the sample magnetization, describing its linear oscillations along the molecular axis on which a nonzero initial value was created. The tensor of the steady-state dynamical susceptibility to the alternating field was found. The action of a short MW pulse on the STS was analytically described, containing a periodic dependence on the pulse duration and its detuning. The anisotropic abnormal dynamics of electron spin-lattice relaxation (SLR) via its one-phonon mechanism was investigated without the high-temperature approximation over the phonon temperature; the SLR rates of the separate transitions of the STS were calculated; the corresponding SLR probabilities were written in a form that admits a fractal dimensionality d of the lattice; the results with d = 4/3 agreed well with the experimental data on the STS of the buried tryptophan of ribonuclease.
The scenarios opened by the increasing availability, sharing, and dissemination of music across the Web are pushing for fast, effective, and abstract ways of organizing and retrieving music material. Automatic classification is a central activity in modelling most of these processes, so its design plays a relevant role in advanced Music Information Retrieval. In this paper, we adopted a state-of-the-art machine learning algorithm, Support Vector Machines, to design an automatic classifier of music genres. In order to optimize classification accuracy, we implemented some previously proposed features and engineered new ones to capture aspects of songs that have been neglected in earlier studies. The classification results on two datasets suggest that our model, based on very simple features, reaches state-of-the-art accuracy on the ISMIR dataset and very high performance on a music corpus collected locally.
In low-visibility conditions, image enhancement is a crucial pre-processing stage for many computer vision applications. In this paper, we elaborate on low dynamic range (LDR) image enhancement and high dynamic range (HDR) image tone mapping as two applications of a unified two-pathway paradigm that draws inspiration from biological vision, particularly the early visual processes. The incoming image is received by two distinct visual paths: the structure-pathway and the detail-pathway, which are analogous to the M-pathway and the P-pathway of the early visual system. To handle visually complex scenes with changing lighting conditions, the structure-pathway employs an extended biological normalization model to combine global and local brightness adaptation. In the detail-pathway, detail enhancement and local noise reduction are accomplished based on local energy weighting. Finally, the outputs of the structure-pathway and the detail-pathway are combined to enhance images captured in dim light. In addition, with some modifications, the proposed model can be used for tone mapping of HDR images. Extensive tests on three datasets (two LDR image datasets and one HDR scene dataset) demonstrate that the model effectively completes both visual enhancement tasks while outperforming the related state-of-the-art techniques.
Non-negative matrix factorization (NMF) has been successfully used in audio source separation and parts-based analysis; however, iterative NMF algorithms are computationally intensive, and therefore time to convergence is very slow on typical personal computers. In this paper, we describe high-performance parallel implementations of NMF developed using OpenMP for shared-memory multicore systems and CUDA for many-core graphics processors. For 20 seconds of audio, we decrease running time from 18.5 seconds to 2.6 seconds using OpenMP and 0.6 seconds using CUDA. These performance increases allow source separation to be carried out on entire songs in a matter of seconds, a process that was previously impractical in terms of time. We give insight into how such significant speed gains were made and encourage the development and use of parallel music information retrieval software.
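For context, the computational core being parallelized is the multiplicative-update iteration; a serial NumPy reference (Euclidean-distance updates, not the paper's optimized OpenMP/CUDA code) looks like:

```python
# Lee-Seung multiplicative updates; the dense matrix products dominate the
# cost and are what map naturally onto OpenMP threads or CUDA kernels.
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    """Factor V (m x n, nonnegative) as W @ H with W: m x rank, H: rank x n."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, rank)), rng.random((rank, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis vectors
    return W, H

V = np.abs(np.random.randn(513, 800))  # stand-in for a magnitude spectrogram
W, H = nmf(V, rank=8)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative residual
```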
Scientists have relied on the Kepler scientific workflow system to help them automate experiments across many different fields using distributed computing platforms. In Kepler, an assigned director oversees the execution of a workflow, but users must still choose the computing resources on which the workflow's tasks will run. A workflow scheduler that can allocate workflow tasks to resources for execution is needed to further reduce the technical effort required of scientists. We evaluate numerous cloud workflow scheduling methods to determine what data must be exposed for a scheduler to successfully plan the execution of a Kepler workflow in the cloud, and we discuss the value of each kind of data about workflow jobs, cloud resources, and cloud service providers.
Analysing text for emotions such as happiness or sadness is called sentiment analysis. To narrow down a large feature collection, this work employs a number of feature selection methods, including mutual information, chi-square, information gain, and TF-IDF. These procedures are assessed on a dataset of 2,000 movie reviews. A support vector machine from the Weka toolkit is used to carry out the classification. We also examine which features work best for determining reviewers' emotions; our feature set includes word features, POS tags, and larger word structures.
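A small sketch of one such configuration, using scikit-learn in place of Weka: TF-IDF features, chi-square feature selection, and a linear SVM. The example reviews, labels, and the value of k are placeholders.

```python
# Chi-square feature selection followed by a linear SVM, the same
# pattern as one of the evaluated configurations; data is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

docs = ["a moving, wonderful film", "dull plot and weak acting"]
labels = [1, 0]                            # 1 = positive, 0 = negative

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word and bigram features
    SelectKBest(chi2, k=2),                # keep top-k features by chi^2
    LinearSVC(),
)
model.fit(docs, labels)
print(model.predict(["wonderful acting"]))
```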
Specifying and managing bioinformatics studies is increasingly done via scientific workflow management systems. Bioinformaticians like this programming paradigm because it allows them to quickly construct elaborate data-processing pipelines. Such a model is based on a graph structure, with nodes standing for individual bioinformatics activities and edges for the flow of information between them. As the complexity of such graph structures grows over time, the reusability of scientific workflows may suffer. In this paper, we advocate the Taverna model as a means of designing workflows efficiently. We contend that "anti-patterns", a term often used in program design, are a major cause of the problems associated with reuse, since they imply the use of idiomatic forms that result in overly intricate designs. This work's key contribution is a mechanism for automatically identifying such anti-patterns and replacing them with alternative patterns that reduce the structural complexity of the workflow. This rewriting approach improves operational efficiency while also improving the user experience: workflows become simpler to design, maintain, and manage, and it is sometimes possible to exploit the latent parallelism among the tasks.
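As a toy illustration of this kind of rewrite (the detection rule below is ours, not Taverna's), the sketch collapses purely linear task chains, one common source of structural complexity, into a single edge using networkx.

```python
# Collapse every task with exactly one predecessor and one successor,
# shrinking linear chains in a workflow DAG; a simplified stand-in for
# the paper's anti-pattern rewriting mechanism.
import networkx as nx

def collapse_chains(g):
    g = g.copy()
    changed = True
    while changed:
        changed = False
        for n in list(g.nodes):
            if g.in_degree(n) == 1 and g.out_degree(n) == 1:
                (p,) = g.predecessors(n)
                (s,) = g.successors(n)
                g.remove_node(n)      # fold the middle task away
                g.add_edge(p, s)
                changed = True
    return g

wf = nx.DiGraph([("load", "clean"), ("clean", "align"), ("align", "report")])
print(collapse_chains(wf).edges)      # chain collapsed to load -> report
```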
Vegetable, fruit, and cookie shops should record all transaction details to better understand customer preferences and stock their shelves. The electronic scale described here is designed around the MSP430 microcontroller and the PTR2000 wireless communication module. The scale does more than just weigh goods; it can also communicate with a host computer and follow its commands. The strain-bridge output signal circuit, time/date circuit, memory circuit, wireless module, and so on are all described. Moving-average filtering is used to refine the measurement data, and Visual Basic is used to create the user interface on the computer side.
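The moving-average step can be sketched in a few lines; the window length is an assumption, and a real implementation would run in C on the MSP430 rather than in Python.

```python
# Sliding-window moving average of the kind used to smooth the strain
# gauge readings before display; window=8 is an illustrative choice.
from collections import deque

def moving_average(samples, window=8):
    buf = deque(maxlen=window)
    out = []
    for s in samples:
        buf.append(s)                     # drop oldest sample when full
        out.append(sum(buf) / len(buf))   # average over current window
    return out

print(moving_average([10.0, 10.2, 30.0, 10.1, 9.9], window=3))
```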
It is not easy to get people working in the same company to coordinate and network with one another and to share information and resources. Competencies, positions, and the structural characteristics of the company, along with communication preferences and group assets, can all work against a productive exchange of ideas and information. Technology, and the Internet in particular, can alleviate these problems while also fostering creativity and efficiency in the workplace. The term "Enterprise 2.0" has emerged to describe the incorporation of essential Web 2.0 features, such as user engagement, into corporate settings where such platforms were previously known as "Intranets". In this paper, we discuss the development and implementation of an open-source Enterprise 2.0 platform at the research institution Fondazione Bruno Kessler (FBK), which is home to over 400 academics and professionals engaged in a wide range of disciplines. We also examine the actions of various user types and their group dynamics, and we study the platform's usage and communication trends. Our preliminary study shows that users are more likely to engage in the most common social activities, including conversing and browsing profiles, with members of their own research group than with other colleagues. When we relate how central people are in the conversation and profile-view networks to how long they have been part of the FBK community, we find that newer members have a greater betweenness centrality.
In the realm of machine learning, probabilistic models are widely regarded as some of the best available. Although well-known probabilistic classifiers perform very well when used separately on a particular classification task, little research has evaluated the performance of two or more classifiers used in conjunction on the same task. In this study, we employ two probabilistic approaches to document classification: the naïve Bayes classifier and the Maximum Entropy model. We then merge the two sets of predictions using two different operators, Max and Harmonic Mean, to boost categorization performance. Results from an evaluation conducted on the "ModApte" subset of the Reuters-21578 dataset demonstrates that the suggested technique significantly improves final classification accuracy.
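The two combination operators act per class on the posterior estimates of the two classifiers. A minimal sketch, with illustrative posterior vectors:

```python
# Combining two classifiers' class posteriors with the Max and
# Harmonic Mean operators; the posterior values are placeholders.
import numpy as np

def combine_max(p1, p2):
    return np.maximum(p1, p2)

def combine_harmonic(p1, p2, eps=1e-12):
    return 2 * p1 * p2 / (p1 + p2 + eps)

nb     = np.array([0.7, 0.2, 0.1])   # naive Bayes posteriors
maxent = np.array([0.5, 0.4, 0.1])   # Maximum Entropy posteriors

for combine in (combine_max, combine_harmonic):
    scores = combine(nb, maxent)
    print(combine.__name__, "-> class", scores.argmax())
```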
DCCP, the Datagram Congestion Control Protocol, is a transport-layer protocol that provides congestion-controlled datagram transfer over unreliable connections. DCCP includes a congestion-management mechanism that adjusts the packet transmission rate based on the status of the network. However, DCCP does not distinguish between bottleneck congestion losses and cellular link losses caused by fading, which leads to unnecessary rate changes. In this article, we suggest a method to improve DCCP's use of available capacity in a cellular network. We use a cross-layer loss-separation method to split the effects of congestion loss from those of fading: the cross-layer method identifies frame loss at the data link layer in real time, so the true fading loss rate can be inferred and excluded from the transport-layer packet loss. Once the sender has determined the congestion loss rate accurately, it can use the DCCP rate-control process to make transmission-rate adjustments that reflect the actual congestion condition along the transmission route. Our simulation findings indicate that when the fading loss rate in a wireless network ranges from 5% to 15%, DCCP with our suggested CCID 3 rate-control algorithm can detect fading loss and increase transmission rates by between 4.7% and 15.5%.
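A hedged sketch of the rate-control idea: the link layer's reported fading loss rate is subtracted from the total loss event rate before evaluating the standard TFRC (CCID 3) throughput equation, so only genuine congestion throttles the sender. The constants follow RFC 5348's simplified form; all inputs are illustrative.

```python
# TFRC throughput equation with the fading loss removed from the loss
# rate, mimicking the cross-layer loss separation; numbers are made up.
from math import sqrt

def tfrc_rate(s, rtt, p):
    """TFRC send rate in bytes/s for packet size s, round-trip time rtt,
    and loss event rate p (simplified RFC 5348 form with t_RTO = 4*RTT)."""
    t_rto = 4 * rtt
    return s / (rtt * sqrt(2 * p / 3)
                + t_rto * 3 * sqrt(3 * p / 8) * p * (1 + 32 * p * p))

total_loss, fading_loss = 0.08, 0.05        # transport- vs link-layer view
congestion_loss = max(total_loss - fading_loss, 1e-6)

print("naive rate:     %.0f B/s" % tfrc_rate(1460, 0.1, total_loss))
print("corrected rate: %.0f B/s" % tfrc_rate(1460, 0.1, congestion_loss))
```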
Using B+ trees and consistent hashing, the authors of this work suggest a new approach to distributed real-time database indexing. First, all storage nodes and TAG points in the distributed system are mapped to a circular hash space, which makes it possible to determine exactly where each TAG point is stored. Second, a TAG-point hash table is constructed that stores the index location of each TAG point in each storage node. Finally, a B+ tree index is created to store and catalogue the information of a single TAG point over time. The suggested strategy is shown to be effective via both theoretical analysis and experimental findings.
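A minimal consistent-hash ring in the spirit of the first step: storage nodes and TAG points are hashed onto the same circular space, and a TAG point is owned by the first node clockwise from its hash. The hash function and node names are assumptions; each node would then keep a per-TAG B+ tree keyed by timestamp, as the abstract's final step describes.

```python
# Consistent hashing: nodes and TAG points share one circular hash
# space; lookup walks clockwise to the first node at or after the key.
import bisect
import hashlib

def h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

class Ring:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def node_for(self, tag):
        keys = [k for k, _ in self.ring]
        i = bisect.bisect(keys, h(tag)) % len(self.ring)  # wrap around
        return self.ring[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.node_for("TAG:turbine/rpm"))   # owning storage node
```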
Several different types of Internet of Things services have been deployed across large geographic areas. Most IoT services need to transmit a large number of very small data packets across long-distance networks, which calls for a simplification of the transfer process. MQ Telemetry Transport (MQTT) is a viable contender for use as the transfer mechanism. In this work, we suggest a virtual-ring design for a distributed MQTT broker. The design follows the IoT Data Exchange Platform specifications described in ISO/IEC JTC 1/SC 41. This paper describes the functionality of a distributed broker architecture that uses a virtual ring network for real-time communication, and it demonstrates the architecture's superiority via a performance study using queuing models.
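A toy model of the virtual-ring forwarding rule (our reading of the design, not the paper's code): each broker delivers a publication to its local subscribers and forwards it to its ring successor until the message returns to the broker that injected it.

```python
# Virtual-ring broker sketch: a publish circulates once around the ring,
# delivered locally at each hop; names and topics are illustrative.
class Broker:
    def __init__(self, name):
        self.name, self.next, self.subs = name, None, {}

    def subscribe(self, topic, client):
        self.subs.setdefault(topic, []).append(client)

    def publish(self, topic, payload, origin=None):
        if origin is self:                  # full circle: stop forwarding
            return
        origin = origin or self
        for client in self.subs.get(topic, []):
            print(f"{self.name} -> {client}: {payload}")
        self.next.publish(topic, payload, origin)

a, b, c = Broker("A"), Broker("B"), Broker("C")
a.next, b.next, c.next = b, c, a            # close the virtual ring
b.subscribe("room/temp", "phone-1")
a.publish("room/temp", "21.5C")             # injected at A, delivered by B
```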
The Internet of Things represents a new paradigm in information technology that can be adapted for continuous patient monitoring. This article reviews and details an implementation in which physiological characteristics such as heart rate and body temperature are monitored in real time using biomedical sensors and a microcontroller. Built on a prototype Internet of Things platform, the device can monitor vital signs and provide real-time updates.
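A sketch of the telemetry loop such a prototype might run, assuming the paho-mqtt Python client; the broker address, topic, update interval, and the read_* sensor functions are hypothetical placeholders for the actual hardware interface.

```python
# Hypothetical vital-signs publisher: reads sensors and pushes JSON
# readings over MQTT every few seconds (all names are placeholders).
import json, time, random
import paho.mqtt.client as mqtt

def read_heart_rate():      # stand-in for the pulse sensor driver
    return 60 + random.randint(0, 40)

def read_temperature():     # stand-in for the body-temperature driver
    return round(36.0 + random.random() * 2, 1)

client = mqtt.Client()
client.connect("broker.example.org")        # hypothetical IoT broker

while True:
    vitals = {"hr_bpm": read_heart_rate(), "temp_c": read_temperature()}
    client.publish("patient/42/vitals", json.dumps(vitals))
    time.sleep(5)                           # update interval: assumption
```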
According to the World Health Organization (WHO), one in four people will be affected by a mental disorder at some point in their lives. However, in many parts of the world, patients do not actively seek a professional diagnosis because of the stigma attached to mental illness and ignorance of mental health and its associated symptoms. In this paper, we propose a model for passively detecting mental disorders using conversations on Reddit. Specifically, we focus on a subset of mental disorders characterized by distinct emotional patterns (henceforth called emotional disorders): major depressive, anxiety, and bipolar disorders. Through passive (i.e., unprompted) detection, we can encourage patients to seek diagnosis and treatment for mental disorders. Our proposed model differs from other work in this area in that it is based entirely on the emotional states of users on Reddit and the transitions between these states, whereas prior work is typically based on content-based representations (e.g., n-grams, language-model embeddings, etc.).
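One plausible realization of an emotion-transition feature (illustrative, not the paper's exact formulation): map each post to a coarse emotion state and use the row-normalized transition matrix between consecutive states as the feature vector fed to a downstream classifier.

```python
# Emotion-transition features: count transitions between consecutive
# post-level emotion states, row-normalize, and flatten. The state set
# and the example sequence are assumptions.
import numpy as np

STATES = ["joy", "sadness", "anger", "fear", "neutral"]
IDX = {s: i for i, s in enumerate(STATES)}

def transition_features(emotion_sequence):
    m = np.zeros((len(STATES), len(STATES)))
    for a, b in zip(emotion_sequence, emotion_sequence[1:]):
        m[IDX[a], IDX[b]] += 1
    row_sums = m.sum(axis=1, keepdims=True)
    m = np.divide(m, row_sums, out=np.zeros_like(m), where=row_sums > 0)
    return m.flatten()

posts = ["sadness", "sadness", "neutral", "sadness", "fear"]
print(transition_features(posts).round(2))
```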