Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/31359
DC Field | Value | Language
dc.contributor.author | Karagiannopoulos, Stavros | -
dc.contributor.author | Aristidou, Petros | -
dc.contributor.author | Hug, Gabriela | -
dc.contributor.author | Botterud, Audun | -
dc.date.accessioned | 2024-02-20T06:30:41Z | -
dc.date.available | 2024-02-20T06:30:41Z | -
dc.date.issued | 2024-05 | -
dc.identifier.citation | Energy and AI, 2024, vol. 16, article no. 100342 | en_US
dc.identifier.issn | 26665468 | -
dc.identifier.uri | https://hdl.handle.net/20.500.14279/31359 | -
dc.description.abstract | While moving towards a low-carbon, sustainable electricity system, distribution networks are expected to host a large share of distributed generators, such as photovoltaic units and wind turbines. These inverter-based resources are intermittent, but also controllable, and are expected to amplify the role of distribution networks together with other distributed energy resources, such as storage systems and controllable loads. The available control methods for these resources are typically categorized based on the available communication network into centralized, distributed, and decentralized or local. Standard local schemes are typically inefficient, whereas centralized approaches show implementation and cost concerns. This paper focuses on optimized decentralized control of distributed generators via supervised and reinforcement learning. We present existing state-of-the-art decentralized control schemes based on supervised learning, propose a new reinforcement learning scheme based on deep deterministic policy gradient, and compare the behavior of both decentralized and centralized methods in terms of computational effort, scalability, privacy awareness, ability to consider constraints, and overall optimality. We evaluate the performance of the examined schemes on a benchmark European low voltage test system. The results show that both supervised learning and reinforcement learning schemes effectively mitigate the operational issues faced by the distribution network. | en_US
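The abstract names deep deterministic policy gradient (DDPG) as the basis of the proposed reinforcement learning scheme. The sketch below is a minimal, illustrative DDPG-style actor-critic update in PyTorch and is not the authors' implementation: the state/action dimensions, network sizes, learning rates, and the placeholder batch of transitions are all assumptions made only to show the structure of the algorithm (a deterministic actor mapping local measurements to bounded inverter setpoints, a Q-function critic, and Polyak-averaged target networks).

```python
# Minimal DDPG-style actor-critic sketch (illustrative only; not the paper's code).
# Assumptions: local state = [voltage magnitude, local PV generation], action =
# [reactive power setpoint, active power curtailment], both normalized to [-1, 1].
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 2, 2  # assumed dimensions

class Actor(nn.Module):
    """Deterministic policy: local measurements -> bounded inverter setpoints."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh(),  # keeps setpoints in [-1, 1]
        )
    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Q-function: (state, action) -> scalar value."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def soft_update(target, source, tau=0.005):
    """Polyak averaging of target-network parameters, as used in DDPG."""
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1.0 - tau).add_(tau * s.data)

actor, critic = Actor(), Critic()
target_actor, target_critic = Actor(), Critic()
target_actor.load_state_dict(actor.state_dict())
target_critic.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

# One illustrative update on a made-up batch of transitions (s, a, r, s').
s  = torch.rand(32, STATE_DIM)           # placeholder local measurements
a  = torch.rand(32, ACTION_DIM) * 2 - 1  # placeholder actions in [-1, 1]
r  = torch.rand(32, 1)                   # placeholder rewards (e.g., penalizing voltage violations)
s2 = torch.rand(32, STATE_DIM)           # placeholder next states
gamma = 0.99

with torch.no_grad():
    y = r + gamma * target_critic(s2, target_actor(s2))  # bootstrapped TD target
critic_loss = nn.functional.mse_loss(critic(s, a), y)
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

actor_loss = -critic(s, actor(s)).mean()  # deterministic policy gradient
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

soft_update(target_actor, actor)
soft_update(target_critic, critic)
```

In a decentralized deployment of this kind, only the trained actor would run at each inverter, acting on purely local measurements; the critic and target networks are needed only during training.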
dc.format | pdf | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Energy and AI | en_US
dc.rights | Attribution-NonCommercial-ShareAlike 4.0 International | en_US
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | *
dc.subject | Supervised learning | en_US
dc.subject | Reinforcement learning | en_US
dc.subject | Deep deterministic policy gradient | en_US
dc.subject | Decentralized control | en_US
dc.subject | Active distribution systems | en_US
dc.title | Decentralized control in active distribution grids via supervised and reinforcement learning | en_US
dc.type | Article | en_US
dc.collaboration | Cyprus University of Technology | en_US
dc.collaboration | ETH Zurich | en_US
dc.collaboration | Massachusetts Institute of Technology | en_US
dc.subject.category | Electrical Engineering - Electronic Engineering - Information Engineering | en_US
dc.journals | Open Access | en_US
dc.country | Cyprus | en_US
dc.country | Switzerland | en_US
dc.country | United States | en_US
dc.subject.field | Engineering and Technology | en_US
dc.publication | Peer Reviewed | en_US
dc.identifier.doi | 10.1016/j.egyai.2024.100342 | en_US
dc.relation.volume | 16 | en_US
cut.common.academicyear | 2023-2024 | en_US
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | -
item.openairetype | article | -
item.cerifentitytype | Publications | -
item.grantfulltext | open | -
item.languageiso639-1 | en | -
item.fulltext | With Fulltext | -
crisitem.author.dept | Department of Electrical Engineering, Computer Engineering and Informatics | -
crisitem.author.faculty | Faculty of Engineering and Technology | -
crisitem.author.orcid | 0000-0003-4429-0225 | -
crisitem.author.parentorg | Faculty of Engineering and Technology | -
Appears in Collections: Άρθρα/Articles
Files in This Item:
File | Description | Size | Format
1-s2.0-S2666546824000089-main.pdf | - | 1.6 MB | Adobe PDF
This item is licensed under a Creative Commons License (Attribution-NonCommercial-ShareAlike 4.0 International).