Monday, June 3, 2019

Multi-Campus ICT Equipment Virtualization Architecture

Multi-campus ICT equipment virtualization architecture for cloud and NFV integrated service

Abstract - We propose a virtualization architecture for multi-campus information and communication technology (ICT) equipment with integrated cloud and NFV capabilities. The aim of this proposal is to migrate most of the ICT equipment on campus premises into cloud and NFV platforms. Adopting this architecture would make most ICT services secure and reliable and make their disaster recovery (DR) economically manageable. We also analyze a cost function and show the cost advantages of the proposed architecture, describe implementation design issues, and report a preliminary experiment on NFV DR transactions. This architecture would encourage academic institutions to migrate their own ICT systems located on their premises into a cloud environment.

Keywords - NFV, Data Center Migration, Disaster Recovery, Multi-campus Network

I. INTRODUCTION

There are many academic institutions that have multiple campuses located in different cities. These institutions need to provide information and communication technology (ICT) services, such as e-learning services, equally to all students on every campus. Usually, information technology (IT) infrastructures, such as application servers, are deployed at a main campus, and these servers are accessed by students on each campus. For this purpose, each campus local area network (LAN) is connected to the main campus LAN via a virtual private network (VPN) over a wide area network (WAN). In addition, Internet access service is provided to all students in the multi-campus environment. To access the Internet, security devices, such as firewalls and intrusion detection systems (IDSs), are indispensable, as they protect computing resources from malicious cyber activities.

With the emergence of virtualization technologies such as cloud computing [1] and network functions virtualization (NFV) [2], [3], we expect that ICT infrastructures such as compute servers, storage devices, and network equipment can be moved from campuses to data centers (DCs) economically. Some organizations have begun to move their ICT infrastructures from their own premises to outside DCs in order to improve security, stability, and reliability. There have also been many contributions to achieving DR capabilities with cloud technologies [4], [5], [6]. Active-passive replication and active-active replication are typical techniques for achieving DR; in these schemes, a dedicated redundant backup system is required at a secondary site. With migration recovery [4], these backup resources can be shared among many users. These studies mainly focus on application servers, while integrated DR capability for ICT infrastructures, covering both application and network infrastructures, is still immature.

We propose a multi-campus ICT equipment virtualization architecture for integrated cloud and NFV capabilities.
The aim of this proposal is to migrate entire ICT infrastructures on campus premises into cloud and NFV platforms. Adopting this architecture for multi-campus networks would improve access link utilization, security device utilization, network transmission delay, disaster tolerance, and manageability at the same time. We also analyze the cost function and show the cost advantages of the proposed architecture. To evaluate the feasibility of our proposed architecture, we built a test bed on SINET5 (Science Information NETwork 5) [7], [8], [9]. We describe the test-bed design and report a preliminary experiment on reducing the recovery time of a VNF.

The rest of this paper is organized as follows. Section II gives the background of this work. Section III describes the proposed multi-campus network virtualization architecture. Section IV presents an evaluation of the proposed architecture in terms of cost advantages and implementation results. Section V concludes the paper and discusses future work.

II. BACKGROUND OF THIS WORK

SINET5 is a Japanese academic backbone network that serves about 850 research institutes and universities and provides network services to about 30 million academic users. SINET5 was fully constructed and put into operation in April 2016. It plays an important role in supporting a wide range of research fields that need high-performance connectivity, such as high-energy physics, nuclear fusion science, astronomy, geodesy, seismology, and computer science. Figure 1 shows the SINET5 architecture. It provides points of presence, called SINET data centers (DCs), deployed in every prefecture in Japan. At each SINET DC, an Internet protocol (IP) router, an MPLS-TP system, and a ROADM are deployed. The IP router accommodates access lines from research institutes and universities. Every pair of IP routers is connected by a pair of MPLS-TP paths, which achieve low latency and high reliability. The IP routers and MPLS-TP systems are connected by 100-Gbps-based optical paths. Therefore, data can be transmitted from one SINET DC to another at up to 100 Gbps, and users who have 100 Gbps access lines can transmit data to other users at up to 100 Gbps.

Currently, SINET5 provides a direct cloud connection service. In this service, commercial cloud providers connect their data centers directly to SINET5 with high-speed links, such as 10 Gbps links. Academic users can therefore access cloud computing resources with very low latency and high bandwidth via SINET5 and receive high-performance computer communication between campuses and cloud computing resources. Today, 17 cloud service providers are directly connected to SINET5, and more than 70 universities have been using cloud resources directly via SINET5.

To evaluate virtualization technologies such as cloud computing and NFV, we constructed a test-bed platform (shown as the NFV platform in Fig. 1) and will use it to evaluate the effect of network delay on ICT services. The NFV platform is deployed at four SINET DCs in major cities in Japan: Sapporo, Tokyo, Osaka, and Fukuoka. At each site, the facilities are composed of computing resources, such as servers and storage, network resources, such as layer-2 switches, and controllers, such as an NFV orchestrator and a cloud controller. The layer-2 switch is connected to the SINET5 router at the same site with a high-speed (100 Gbps) link. The cloud controller configures the servers and storage, and the NFV orchestrator configures the VNFs on the NFV platform. Users can set up and release VPNs between universities, commercial clouds, and the NFV platform dynamically over SINET with an on-demand controller. This on-demand controller configures the routers through a NETCONF interface and sets up the VPNs correlated with the NFV platform through a REST interface.
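As a rough illustration of how such on-demand VPN provisioning could be driven from a user script or portal, the Python sketch below posts a VPN request to a REST endpoint; the router-side NETCONF configuration is assumed to be handled by the controller itself. The endpoint URL, payload fields, and response format are hypothetical placeholders for illustration, not the actual API of the SINET on-demand controller.

import requests  # third-party HTTP client (pip install requests)

# Hypothetical base URL and token -- placeholders, not the real controller API.
ON_DEMAND_CONTROLLER = "https://on-demand.example.ac.jp/api/v1"
API_TOKEN = "..."

def request_campus_vpn(vlan_id, endpoints):
    """Ask the on-demand controller to stitch an L2 VPN (VLAN) between the
    given campus and NFV-platform sites and return its identifier."""
    payload = {
        "service": "l2vpn",      # assumed service type
        "vlan_id": vlan_id,      # VLAN taken from the preconfigured range
        "endpoints": endpoints,  # e.g. ["main-campus", "sub-campus-1", "nfv-dc-tokyo"]
    }
    resp = requests.post(
        f"{ON_DEMAND_CONTROLLER}/vpns",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["vpn_id"]  # assumed response field

# Example (hypothetical values):
# vpn_id = request_campus_vpn(1001, ["main-campus", "sub-campus-1", "nfv-dc-osaka"])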
Today there are many universities that have multiple campuses spread over a wide area. In such a multi-campus university, many VPNs (VLANs), e.g., hundreds of VPNs, need to be configured over SINET to extend the inter-campus LAN. To satisfy this demand, SINET has started a new VPN service called the virtual campus LAN service. With this service, the layer-2 domains of multiple campuses can be connected as if through a single layer-2 switch, using preconfigured VLAN ranges (e.g., 1000-2000).

III. PROPOSED MULTI-CAMPUS ICT EQUIPMENT VIRTUALIZATION ARCHITECTURE

In this section, the proposed architecture is described. The architecture consists of two parts. First, we describe the network architecture and clarify the issues with it. Next, the NFV/cloud control architecture is described.

A. Proposed multi-campus network architecture

The multi-campus network architecture is shown in Figure 2. There are two legacy network architectures and a proposed network architecture. In legacy network architecture 1 (LA1), Internet traffic for multiple campuses is delivered to a main campus (shown as a green line) and checked by security devices. After that, the Internet traffic is distributed to each campus (shown as a blue line). ICT applications, such as e-learning services, are deployed at the main campus, and access traffic to the ICT applications is carried by VPN over SINET (shown as a blue line). In legacy network architecture 2 (LA2), Internet access differs from LA1: Internet traffic is delivered directly to each campus and checked by security devices deployed at that campus. In the proposed architecture (PA), the main ICT applications are moved from the main campus to an external NFV/cloud DC. Thus, students on both the main campus and the sub-campuses access ICT applications via VPN over SINET. Internet traffic traverses virtual network functions (VNFs), such as virtual routers and virtual security devices, located at NFV/cloud DCs; it is checked by the virtual security devices and delivered to each main campus or sub-campus via VPN over SINET.

There are pros and cons among these architectures. Here, they are compared across five points: access link utilization, security device utilization, network transmission delay, disaster tolerance, and manageability.

(1) Access link utilization

The cost of an access link from a sub-campus to the WAN is the same in LA1, LA2, and PA. The cost of an access link from the main campus to the WAN is larger in LA1 than in LA2 and PA because redundant traffic traverses the link. In PA, on the other hand, an additional access link from the NFV/cloud DC to the WAN is required. Thus, evaluating the total access link cost is important. In this evaluation, it is assumed that the additional access links from NFV/cloud DCs to the WAN are shared among the multiple academic institutions that use the NFV/cloud platform, and the cost is evaluated taking this sharing into account.
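To make the sharing argument concrete, one simple way to write the total access link cost is shown below. This formulation and its symbols are our own illustrative sketch, not the cost function analyzed by the authors in Section IV. Let a_0 be the main-campus traffic of institution u, a_1, ..., a_{n_u} the traffic of its sub-campuses, c(x) the cost of an access link dimensioned for traffic x, A_v the total traffic of institution v, and M the number of institutions sharing the NFV/cloud DC access link:

\begin{aligned}
C_{\mathrm{LA1}} &\approx c\Big(a_0 + \sum_{i=1}^{n_u} a_i\Big) + \sum_{i=1}^{n_u} c(a_i), \\
C_{\mathrm{PA}}  &\approx c(a_0) + \sum_{i=1}^{n_u} c(a_i) + \frac{1}{M}\, c\Big(\sum_{v=1}^{M} A_v\Big).
\end{aligned}

Under this kind of model, the LA1 main-campus link must be dimensioned for its own traffic plus the sub-campuses' detoured Internet traffic, whereas in PA each campus link carries only its own traffic and the cost of the shared DC link is split among the M institutions. Whether PA comes out cheaper thus depends on the link cost function c and on M, which is why the sharing assumption above matters.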
(2) Security device utilization

LA1 and PA are more efficient than LA2 because Internet traffic is concentrated in LA1 and PA, so a statistical multiplexing effect on the traffic can be expected. In addition, in PA the amount of physical computing resources can be reduced because virtual security devices share physical computing resources among multiple users. Therefore, the cost of virtual security devices for each user will be reduced.
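The statistical multiplexing effect behind this argument can be illustrated with a small, self-contained simulation. The traffic model and numbers below are invented purely for illustration; they are not measurements from the test bed.

import random

random.seed(0)

NUM_CAMPUSES = 5
NUM_SAMPLES = 10_000  # time samples of offered load (arbitrary units)

# Independent, bursty per-campus traffic traces -- purely synthetic numbers.
traces = [[random.expovariate(1 / 10) for _ in range(NUM_SAMPLES)]
          for _ in range(NUM_CAMPUSES)]

# LA2-style provisioning: each campus sizes its own security device for its own peak.
sum_of_peaks = sum(max(trace) for trace in traces)

# LA1/PA-style provisioning: one shared device sized for the peak of the aggregate.
aggregate_peak = max(sum(trace[i] for trace in traces) for i in range(NUM_SAMPLES))

print(f"sum of per-campus peaks: {sum_of_peaks:7.1f}")
print(f"peak of aggregate load : {aggregate_peak:7.1f}")
# The aggregate peak is typically far below the sum of the individual peaks,
# which is the statistical multiplexing effect that lets LA1/PA provision
# less total security-device capacity than LA2.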
(3) Network transmission delay

The network delay of Internet traffic is longer with LA1 than with LA2 and PA because, in LA1, Internet traffic to the sub-campuses is detoured and transits the main campus. In LA2, Internet traffic to a sub-campus is delivered directly from an Internet exchange point on the WAN to that sub-campus, so the delay is suppressed. In PA, the network delay can also be suppressed because the NFV/cloud data center can be selected so that it is located near an Internet access gateway on the WAN. On the other hand, the network delay for ICT application services will be longer in PA than in LA1 and LA2. Therefore, the effect of a longer network delay on the quality of IT application services has to be evaluated.

(4) Disaster tolerance

Regarding Internet service, LA1 is less disaster tolerant than LA2. In LA1, when a disaster occurs around the main campus and the network functions of the campus go down, students on the other sub-campuses cannot access the Internet. Regarding IT application services, in both LA1 and LA2, students cannot access IT services when a disaster occurs around the main campus or data center. In PA, the NFV/cloud DC is located in an environment robust against earthquakes and flooding, so robustness is improved compared with LA1 and LA2.

Today, systems capable of disaster recovery (DR) are mandatory for academic institutions, so service disaster recovery functionality is required. In PA, backup ICT infrastructures located at a secondary data center can be shared with other users. Thus, no dedicated redundant resources are required in steady-state operation, and the resource cost can be reduced. However, if VM migration cannot be fast enough to continue services, active-passive or active-active replication has to be adopted. Therefore, reducing the recovery time is required in order to adopt migration recovery and achieve DR manageability more economically.

(5) Manageability

LA1 and PA are easier to manage than LA2. Because the security devices are concentrated at a single site (the main campus or the NFV/cloud data center), the number of devices can be reduced, which improves manageability.

There are three issues to consider when adopting the PA:
- Evaluating the access link cost of an NFV/cloud data center.
- Evaluating the network delay effect on ICT services.
- Evaluating the migration period for migration recovery replication.

B. NFV and cloud control architecture

For the following two reasons, there is strong demand to keep using legacy ICT systems, so legacy ICT systems have to be moved to NFV/cloud DCs as virtual application servers and virtual network functions. One reason is that institutions have developed their own legacy ICT systems on their own premises with vendor-specific features. The other is that an institution's workflows are not easily changed, and the same usability for end users is required. Therefore, legacy ICT infrastructures deployed on campus premises should continue to be used in the NFV/cloud environment. In the proposed multi-campus architecture, these application servers and network functions are controlled by per-user orchestrators.

Figure 3 shows the proposed control architecture. Each institution deploys its ICT system on IaaS services. VMs are created and deleted through the application programming interface (API) provided by the IaaS providers. Each institution sets up an NFV orchestrator, an application orchestrator, and a management orchestrator on VMs. Active and standby orchestrators run in the primary and secondary data centers, respectively, and check each other's aliveness. The NFV orchestrator creates the VMs, installs the virtual network functions, such as virtual routers and virtual firewalls, and configures them. The application orchestrator installs the applications on VMs and sets them up. The management orchestrator registers these applications and virtual network functions with monitoring tools and saves the logs output by the IT service applications and network functions.

When the active data center suffers a disaster and the active orchestrators go down, the standby orchestrators detect that the active orchestrators are down. They then start establishing the virtual network functions and the application and management functions. After that, the VPN is connected to the secondary data center in cooperation with the VPN controller of the WAN. In this architecture, each institution can select NFV orchestrators that support the user's legacy systems.

IV. EVALUATION OF PROPOSED NETWORK ARCHITECTURE

This section presents an evaluation of the access link cost of the proposed network architecture. The test-bed configuration is also introduced, and an evaluation of the migration period for migration recovery is shown.

A. Access link cost of NFV/cloud data center

In this sub-section, an evaluation of the access link cost of PA compared with LA1 is described. The network cost is defined in terms of an institution u that has a main campus and n_u sub-campuses and of the amount of traffic that institution u generates.

B. Test-bed configuration

Different sites can be connected between a user site and cloud sites by a SINET VPLS (Fig. 7). This VPLS can be established dynamically through a portal that uses the REST interface of the on-demand controller. For upper-layer services such as Web-based services, virtual network appliances, such as virtual routers, virtual firewalls, and virtual load balancers, are created on the servers through the NFV orchestrator. DR capabilities for the NFV orchestrator are under deployment.

C. Migration period for disaster recovery

We evaluated the VNF recovery process for disaster recovery. This process consists of four steps:
Step 1: Host OS installation
Step 2: VNF image copy
Step 3: VNF configuration copy
Step 4: VNF process activation
The process starts with host OS installation because some VNFs are tightly coupled with the host OS and hypervisor; there are several kinds and versions of host OS, so the host OS can be changed to suit the VNF. After host OS installation, the VNF images are copied into the created VMs. Then, the VNF configuration parameters are adjusted to the attributes of the secondary data center environment (for example, VLAN-ID and IP address), and the configuration parameters are installed into the VNF. After that, the VNF is activated.
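A recovery run of these four steps could be sequenced by a simple script along the following lines. This is a sketch under our own assumptions: the function names, the RecoveryContext fields, and the printed actions are placeholders, not the interface of any particular NFV orchestrator, whose own APIs would perform the real operations.

import time
from dataclasses import dataclass

@dataclass
class RecoveryContext:
    # Illustrative placeholders for the secondary-DC attributes that
    # Step 3 has to rewrite (e.g., VLAN-ID and IP address).
    host_os_image: str
    vnf_image: str
    vlan_id: int
    ip_address: str

def install_host_os(ctx):
    # Step 1: install a host OS/hypervisor version that the VNF is coupled to.
    print(f"installing host OS image {ctx.host_os_image}")

def copy_vnf_image(ctx):
    # Step 2: copy the VNF image into the VM created at the secondary DC.
    print(f"copying VNF image {ctx.vnf_image}")

def copy_vnf_configuration(ctx):
    # Step 3: adjust and install the configuration (VLAN-ID, IP address, ...).
    print(f"applying configuration: VLAN {ctx.vlan_id}, IP {ctx.ip_address}")

def activate_vnf(ctx):
    # Step 4: start the VNF process.
    print("activating VNF")

def recover_vnf(ctx):
    """Run Steps 1-4 in order and record the duration of each step."""
    durations = {}
    steps = [("host OS installation", install_host_os),
             ("VNF image copy", copy_vnf_image),
             ("VNF configuration copy", copy_vnf_configuration),
             ("VNF process activation", activate_vnf)]
    for name, step in steps:
        start = time.monotonic()
        step(ctx)
        durations[name] = time.monotonic() - start
    return durations

# Example (hypothetical values):
# recover_vnf(RecoveryContext("ubuntu-18.04", "vrouter.qcow2", 1001, "192.0.2.10"))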
In our test environment, a virtual router can be recovered from the primary data center to the secondary data center, and the total duration of recovery is about 6 min. The durations of Steps 1-4 are 3 min 13 sec, 3 min 19 sec, 11 sec, and 17 sec, respectively.

To shorten the recovery time, the standby VNF can currently be set up and activated in advance. If the same configuration can be applied in the secondary data center's network environment, snapshot recovery is also available. In this case, Step 1 is eliminated, and Steps 2 and 3 are replaced by copying a snapshot of the active VNF image, which takes about 30 sec, so the recovery time is about 30 sec.

V. CONCLUSION

Our method using cloud and NFV functions can achieve DR at lower cost. We proposed a multi-campus ICT equipment virtualization architecture for cloud and NFV integrated service. The aim of this proposal is to migrate entire ICT infrastructures on campus premises into cloud and NFV platforms. This architecture would encourage academic institutions to migrate their own ICT systems located on their premises into a cloud environment. Adopting this architecture would make entire ICT systems secure and reliable, and the DR of ICT services could be economically manageable. In addition, we analyzed the cost function, showed the cost advantages of the proposed architecture, described implementation design issues, and reported a preliminary experiment on the NFV DR transaction.
