sloomy155's co-location homelab in 2021

Hello homelab (super long post, maybe the longest post ever here?). Long-time lurker, first-time poster. In fact this is my first Reddit post ever (other than a couple of test posts to check the syntax). The only other social media I have an account on is LinkedIn, and that doesn't get used much. My username doesn't mean anything; it's just a random collection of letters and numbers. I didn't even know the term homelab until I came across this subreddit, probably a couple of years ago now. I have read a bunch of posts here, and have seen homelabsales and DataHoarder as well. It has been interesting to see other people's perspectives. I never really viewed my systems as a lab; I have been hosting my own DNS/email/web/etc. on the internet since about 1996. In my earliest days I volunteered at a super tiny ISP that one of my friends helped start, and that was my first experience hosting live systems on the internet. That ISP eventually closed down, and the remaining services ended up being hosted out of my home (I was never paid; I did it for the experience at the time). Here I am ~25 years later, still messing around. If you just want pictures and don't care about my stories, here they are. I have comments on each image on imgur, a small fraction of what I cover below:
High level: what I use my homelab for
The first phase (starting around 2001)

Here are 11 pictures I chose from that era to share. I hope you understand I don't remember, nor do I have documented, most of the details behind the systems I had at the time. But I had a mixture of mostly tower and some rackmount systems. All x86 systems ran Debian Linux (having switched exclusively to Debian in 1998). And yes, I even ran Linux on my desktop (at the time I believe I used AfterStep as my WM, though I did dabble in KDE when it was pre-1.0 in the late 90s?). I even played games on Linux. I played a lot of the original Unreal Tournament (online), along with several other Loki games that I still have on CD (many of which I have never installed). I also had some non-x86 boxes, including a Sun UltraSPARC, an SGI Indy, and a couple of AIX systems too, all of which came from a company I worked at that closed their local office. They developed software for various Unix and Linux systems. I really wanted to take one of their HP-UX and Tru64 servers, but they couldn't spare them. These non-x86 boxes got minimal use. My network was powered by an Extreme Networks Summit 48 10/100 2U switch, which I bought off someone I knew online at the time. I think I bought it in 2000 or 2001, and I have been a happy user of their gear for the past 20 years (never knew of them before this time). One of the tower servers was from a company called SAG Electronics; I still remember reading their ads, perhaps in PC Magazine, and drooling over their stuff. That wasn't my server; it was my friend's, who hosted it at my apartment on a dedicated DSL line for his websites. For some reason or another I became a fan of PC Power & Cooling power supplies. I wanted a quality power supply, and to my untrained eye they seemed to take more care in making quality products. Maybe I'm wrong and was just lucky, but I have had good success using their power supplies over the past 20 years now, never a failure.
I have two of their PSUs in place today; one is about a decade old, the other maybe 3-5 years old (both are the same model). My DSL connections (one for me, one for my friend, each 1Mbps/1Mbps) came in using Qwest DSL, and originally the ISP portion was oz.net. My DSL connection had 8 public IPv4 addresses, and I hosted my DNS, web, email, etc. That lasted until about 2006, maybe 2007 or even 2008, I don't recall, when Oz's customers got sold yet again to another ISP. This ISP sent me notice saying all my IPs would be changing. That was a deal breaker for me: changing my IPs when I host authoritative DNS was going to be a real pain. So I decided to go colo at that point.

The second phase (starting 2006, maybe 2007 or 2008)

I got a server from the company I worked at; this time I have some specs for you:
I was given a new network switch about this time for home. I replaced my Extreme Networks Summit 48, a 2U switch from the late 90s, with the latest generation of that series (but still an older product), a Summit 48si, still 10/100 (with gig uplinks; I had nothing at home that used gig at the time). It also ran their latest operating system (for non-chassis switches anyway), whereas the older switch could not be upgraded further. However it was 1U, and super loud. I did a fan modification, replacing the stock fans with Sunon MagLev fans. I don't know if Noctua was around; I hadn't heard of them at the time. I came across Sunon somewhere and their marketing looked cool to me. I'm not a fan expert. This is the only fan modification I've ever done, that I can think of anyway. I try to stay away from these kinds of changes, as they more often than not seem to go badly for me (one such change is described later). I'm fine with component-level stuff, but getting into wires and splicing makes me uneasy. The mod worked fine: the switch was much quieter, far from quiet but bearable.

The third phase (starting in 2011)

While I was in transition between the second and third phases (moving from Washington state back to California) I hosted my stuff inside Terremark vCloud Express, which was a VMware-based cloud provider at the time (later acquired by Verizon and eventually spun off or shut down, I don't recall). It worked OK for my minimal workload, but I really had to limit what I did to keep my costs reasonable. I only used Terremark for a few months, I think. Then I purchased a new server from a Supermicro supplier I had been using for many years. I don't have any pictures of this system; in fact I just took it to be recycled a couple of months ago, having retired it about 3 years ago now. This supplier had a $100/mo data center hosting package with unlimited 100Mbps bandwidth (and onsite support), which I was excited about.
I do have specs for this system even though I don't have pictures:
The system ran fine for a long time. I did have a couple of hard disk failures, but other than the disks, no failures. The 3Ware integration with vSphere was quite limited (I really liked 3Ware, going all the way back to my original systems in 2001). Meanwhile, at home I had a significantly downsized homelab, consisting of a single beefy (to me) Debian server, with a Soekris net5501 OpenBSD-based firewall. I purchased a refurbished HP xw9400 from their outlet store and added/changed some things to end up with these specs:
I eventually moved the 3Ware controller and drives to a new smaller, quieter AMD Athlon system(pictures of this chassis are later as I reused the chassis and PSU for my Ryzen):
My internet connection was protected by a pair (eventually just one active) of Soekris net5501 firewalls:
All of that was protected by my first double conversion UPS:
The fourth [and current] phase (starting in 2017)

I moved yet again, and so it was time to revamp the homelab again. This time (my current location) was to the Central Valley in California, with peak temperatures much hotter than where I was in the Bay Area before. My little Athlon server worked fine the first year I was here. The second year I decided I wanted to build something new, something that would have better cooling and be better able to handle the hotter temps (the highest ambient temp I have noticed at my new server is about 92 degrees F). It's quite possible the older Athlon could do it, but I just wanted something with more airflow. So I built my server, goliath. The name makes it sound really big; I guess it is not, I just picked it because it had a lot of storage (a lot being a lot for me; I have nothing compared to some on DataHoarder). When this server started out I moved my 3Ware card yet again to this system, with its 4x2TB disks, then connected a pair of 6TB disks with ZFS to the motherboard controller, and a single SSD for boot. About two years ago I replaced the 3Ware with the LSI controller and replaced the disks and such, so I present the current configuration of the system as it stands now:
LSI Card Temperature

I know that LSI cards (mostly in IT mode, is it?) are quite popular here. I purchased my LSI card on eBay (it seemed to be a reputable seller and the cost seemed reasonable at $389 for a new card), then purchased the battery pack on Newegg. I had issues getting them to talk to each other, and had to go back and forth with LSI support for quite a while. Eventually they replaced both the card and the battery pack, and things have been fine since. During that exchange I asked the support guy (who seemed like a super cool old-school geek I could relate to, very laid back, probably been there forever) about the temperatures of the LSI card. With my original PCI slot cooler (not the one above), the same one I was using on my 3Ware card, the LSI chip was hovering at about 60C I think. The LSI chip has its own heatsink+fan as well. That seemed high to me (quite a bit higher than the 3Ware was at). The support person said 60C is "OK" but strongly advised not to let the chip go above 55C (he called it the "ROC" chip, I think). Wanting to be super careful, I purchased a new PCI slot cooler with 2x92mm fans and put it right next to the LSI card. I also set up trending of the temperature using LibreNMS (works fine for home; I wouldn't use it for work personally). The chip never seems to go above 55C even when the ambient temperature is 92F. Normally it peaks at 54C, but is otherwise lower. The lowest I have seen is the high 40s, when the ambient temperature was in the upper 60s. The Linux web-based management software for the LSI card really sucks compared to 3Ware, in my opinion, though the CLI is powerful. Anyway, I wanted to call that out as I have seen several posts here and on DataHoarder with folks mentioning LSI chips running much hotter; I think I've seen one or two claiming their chips run at 70C+. While they may work at that temp, I just fear it will lower the life of the component. So I am happy keeping mine closer to 50C.
But I think it's stupid that I need 2x92mm fans to do it. The card needs a better heatsink/fan design. I can only recall maybe 3 RAID controller failures across the ~600 or so servers I have managed in the last 20 years (probably 80% of those servers ran 3Ware cards).

Favored CPU

Also wanted to give a shout-out to the CPU, the Intel Xeon E3-1240L V5. This is a quad-core 2.1GHz Xeon that runs at only 25W! Oh, when I came across that CPU I wanted it so bad (originally I wanted a slightly different model that had a built-in GPU, but couldn't find it anywhere). It was SUPER difficult to find. Several places claimed they had it in stock, then I would order and wait... and wait... They would say they were waiting on their distributor. After weeks of waiting with no update in sight I would cancel my order. The only place I found that had it was Dell. The CPU was about $500, the most I had spent on a CPU in a long time. But I really love that it has such low power usage while still being a full-fledged Xeon. I think the CPU is similar to what I have in my newer Lenovo P50 laptop (which uses an i7 but has a Xeon option). The Xeon in the goliath system runs super cool as a result. It was quite a step up for video encoding as well, versus my other systems at the time.

Containers

In an effort to keep things "cleaner" on this system (having fewer packages installed on the core server), I opted to set up containers with LXC, and I run several such containers:
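The post doesn't show the exact container setup, but for anyone curious, a minimal sketch of creating a Debian LXC container on a Debian host looks roughly like this (the container name "webtest" and the release are my assumptions, not the author's actual containers):

```shell
# Create a Debian container from the public image server
# ("webtest" and "bullseye" are hypothetical choices)
lxc-create -n webtest -t download -- -d debian -r bullseye -a amd64

# Start it and check its state
lxc-start -n webtest
lxc-info -n webtest

# Get a shell inside the container
lxc-attach -n webtest
```

The `download` template pulls prebuilt images, which keeps the host itself free of extra packages, matching the "cleaner core server" goal described above.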
I guess I standardized on LXC for stuff at home, and VMware VMs for stuff at my colo.

To ZFS... or not to ZFS...

I know ZFS is very popular here. I have used ZFS off and on since probably 2009ish. It has its use cases for sure. For my personal needs I feel more comfortable with hardware RAID and ext4. I manage servers for a living and run Fibre Channel and iSCSI SANs, as well as NFS and other filesystems like XFS, and I do run ZFS in some cases at work (the only use case is to leverage the compression on low-tier MySQL systems). I deployed ZFS on my Athlon server (on top of 3Ware RAID) mainly for snapshots and ran with it for years. I really wanted snapshot support, but at the end of the day I never really used the snapshots. I learned the hard way (at work) how ZFS behavior changes when it gets to be 80% full. For me that was the deciding factor not to use ZFS in my current build. My main filesystem is 93% full (with 780G free, and 3.5T free in the volume group), and a smaller SSD filesystem (ReiserFS 3, tons of small files) is 94% full. My filesystems run full like that for a long time. It could be that for my use case the ZFS overhead at 94% full wouldn't be a big deal, but whatever. With 3Ware before, and with LSI now, I have weekly scrubs happening at the controller level. That is good enough for me. ext4 is the old, boring, reliable choice, so that's what I went with. Most of my backups are done using a tool called rsnapshot (or manual rsync for file server data, which doesn't change often). When I got this goliath system I had an idea to use a small ZFS filesystem with dedupe enabled for rsnapshot (instead of using hard links). This was with the original 3Ware RAID and 4x2TB disks in RAID 10. I even upgraded the memory to 32GB from 16GB just in case. The filesystem was going to be about 200GB in size, I think.
I don't know what the issue was, but the performance was just terrible; I was getting maybe 300 KILOBYTES per second to the filesystem according to iostat. Maybe some weird behavior with 3Ware or something, I don't know (it certainly wasn't the fastest controller, but not that slow). So I quickly abandoned that idea and went back to rsnapshot with hard links. It's very rare that I need to go back to backups for anything; it seems like less than once a year, maybe once every 2 years, and usually for something trivial.

Video encoding

A few years ago I decided to really up my game with backing up my movies and TV shows. In the end it turned into a big hobby. I have purchased more than 3,000 DVD and Blu-ray discs, probably more than 2,000 of which in the past 5 years. Backing them up, encoding, and cataloging them is quite a tedious process, but at one point I got into a groove and developed a good process for getting it done accurately. I've never used any peer-to-peer stuff, no BitTorrent or anything like that. All of my stuff is purchased on disc and stored in CD binders. Originally I would rip and encode using a Linux tool called dvd::rip, which I believe is a Perl-based GUI; this was before 2010, I think. It even had a cluster mode where you could distribute the encoding to multiple systems in parallel. I think the codec I used was Xvid at the time. Later h264 came out, and I became aware of Handbrake and have been using that ever since, first on Windows, later on Linux. When I got this new Xeon it really boosted my encoding throughput. But I still had a massive backlog and was never able to catch up. Enter my first dedicated encoding system, my Ryzen 3700X:
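For anyone unfamiliar with rsnapshot: its hard-link rotation is driven entirely by a single config file. A minimal sketch of such a config (the paths and retention counts here are assumptions for illustration; note rsnapshot requires tabs, not spaces, between fields):

```
# /etc/rsnapshot.conf (fragment) -- fields must be TAB-separated
snapshot_root	/backup/rsnapshot/

# keep 7 dailies and 4 weeklies
retain	daily	7
retain	weekly	4

# link_dest uses rsync's --link-dest, so unchanged files in each new
# snapshot become hard links to the previous one instead of copies
link_dest	1

backup	/home/	localhost/
backup	/etc/	localhost/
```

Because unchanged files are hard links, each snapshot looks like a full copy while consuming only the space of changed files, which is why it made the ZFS-dedupe experiment above unnecessary.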
I sort of expected to use it for SOMETHING other than video encoding, but in the end, when I don't have a lot of stuff to encode, I keep it off, because I'm afraid it may fry itself again. Less than a year after I bought it, it was encoding overnight and when I got up the next day it was down. I don't recall if the screen had anything on it or if it was black, but it was on and could not respond to ping. I turned it off (I think I had to yank the power), and it would not turn on again. I tried many times; it would not turn on. I removed the Ryzen board+CPU and put the original Athlon board+CPU back in, and it powered up right away. So the PSU was fine. I tried powering on the Ryzen again a few more times, and the board literally had a mini fireworks display of sparks coming out of one of the chips, and a puff of smoke. I want to say I've never had a complete motherboard failure AT HOME in more than 20 years (perhaps never in my life), so I was shocked. I completed the RMA process with Gigabyte and they sent me a new board. I was hoping for a newer revision number indicating they had improved the board, but the revision stayed the same. Fortunately no other components were damaged. The system has encoded probably a couple thousand things since without issue, but I'm constantly worried it will fry itself again. I have spent probably thousands of hours ripping, encoding, and cataloging my DVDs and Blu-rays. I have just over 10,000 TV episodes and over 700 movies. I struggle hard to find anything else that remotely interests me at this point; I've literally scrolled through thousands of titles trying to find something else, but often come up empty now. Total space for that media is 7.6TB. I "cut the cord" in 2019, and recently made a Gantt chart (WARNING: image size is 21,541 x 4,276) of the TV series to try to see at what point I lost interest in cable TV.
(Side note: most Gantt chart tools aren't geared for tracking 30-year periods of time; Visio handled it fine, though image exporting was a bit problematic.) I was a big-time TiVo user for 15+ years, but for the last 3+ years of TV usage TiVo really wasn't recording much at all anymore, and I struggled to find anything worthwhile to watch (even with every premium channel). It felt so weird to cut cable TV, but I did it. I switched entirely to my home collection (which I had already done about 8 months before cutting cable). I do not use any streaming services. I measured the video encoding performance comparing my goliath system running the Xeon, vs the Ryzen, vs my Lenovo P50 running a quad-core i7, on the same ~1GB DVD rip in Handbrake (probably slightly different versions) using the same encoding settings (very slow preset and the same RF setting, h264), all on Linux of course.
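Settings like these can also be driven from HandBrakeCLI, which makes batch-encoding a backlog scriptable. A sketch of a roughly equivalent invocation (the file names and the RF value of 20 are my assumptions; the post doesn't state its exact RF):

```shell
# Constant-quality h264 encode with the "veryslow" x264 preset,
# approximating the settings described above (RF 20 is hypothetical)
HandBrakeCLI -i input.mkv -o output.mkv \
  --encoder x264 --encoder-preset veryslow \
  --quality 20 \
  --aencoder copy
```

Wrapping that in a `for f in rips/*.mkv` loop is a common way to keep an encoding box like the Ryzen busy overnight.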
I do my streaming with a defunct software product called TV Mobili. I'm probably the only one left in the world who still uses it; the version I have is from 2015. I'm a licensed user and it works flawlessly for my basic needs of streaming to Western Digital WD TV Live boxes (also defunct). I have 2 WD TVs in use, and 2 more as spares. I also have a few Rokus which I played with a bit, but I prefer the WD TV (the Rokus are sitting on a shelf now). I do not do any transcoding; everything is h264 1080p or below (my TVs are 1080p, no 4K). My firewall had to be upgraded as my Soekris boxes were only 10/100, and my new internet connection was 200Mbit or maybe 250. Soekris themselves seemed to be stagnant (they have since ceased all U.S. operations), and I came across the PC Engines APU2. This seemed like a real good replacement:
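The post doesn't show the firewall rules, but for context, an OpenBSD NAT gateway like the Soekris/APU2 boxes described here is configured in a single pf.conf. A minimal sketch (the interface names and LAN subnet are assumptions, not the author's actual config):

```
# /etc/pf.conf (sketch; em0/em1 and 192.168.1.0/24 are hypothetical)
ext_if = "em0"
int_if = "em1"
lan    = "192.168.1.0/24"

set skip on lo
block in all                                           # default-deny inbound
pass in  on $int_if from $lan to any                   # allow LAN outbound
pass out on $ext_if from $lan to any nat-to ($ext_if)  # NAT the LAN behind the WAN IP
```

`pfctl -nf /etc/pf.conf` syntax-checks the file before loading, which is worth doing on a remote box.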
My switch started out as a basic metal 8-port Netgear, but earlier this year I replaced it with an Extreme Networks Summit X440-8t, which I bought on eBay. It was new, as in never having been used (there is a command in the software that shows how many hours the switch has been in use, to validate), and the price was great, so I was real happy to get it. It is fanless, it idles at only 19W, and it has basic layer 3 functionality. Total of 12 ports, 8 RJ45 and 4 SFP, all 10/100/1000; no 10G here. It does run hot to the touch, but always well within spec: I think the hottest I have seen it is 55C, its normal operating range is up to 68C, and it is currently at 45C. This layer 3 switch came in handy later when I wanted to configure some WiFi access points for my job before taking them to a brand new office (I have been WFH since about 2014). I had no experience working with these APs, but I was able to easily create the same VLANs they would use at their destination on my network, enable routing between the VLANs, and off I went. I upgraded my HP xw9400 workstation to 6-core CPUs and 12GB of memory, added two more DVD drives (this helped get through my backlog; at one point I probably ripped 40+ DVDs in a day across 4-5 drives), and replaced the boot disk with an SSD. It runs Windows 7 today, and stays off 99% of the time; the only thing I really use it for is dealing with certain Lionsgate Blu-ray movie titles. This is all protected by a new (at the time) CyberPower OL1000RTXL2U double-conversion UPS (no expansion battery pack, no network card); the fan runs all the time, very loud, and it took a long time to get used to. This UPS also protects most everything else in my home office, including monitors, laptop, and accessories (not the air filter system or paper shredder, though). I have been using Network UPS Tools (NUT) for 20 years, and I continue to do so today with my current UPSs. I have a CyberPower PR1500LCD in my living room protecting all of my stuff in there.
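For readers who haven't used Network UPS Tools: a USB-connected CyberPower unit like these typically only needs a short driver stanza to become monitorable. A minimal sketch (the UPS name "cyberpower" is a hypothetical label; the `usbhid-ups` driver generally covers these units, but check NUT's hardware compatibility list for your model):

```
# /etc/nut/ups.conf (sketch)
[cyberpower]
    driver = usbhid-ups
    port = auto
    desc = "Office double-conversion UPS"
```

Once `upsdrvctl start` and `upsd` are running, `upsc cyberpower@localhost` dumps the load, charge, and runtime figures, which is all that's needed for the "just to see the load" use case described below.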
I have no regular computers in my living room anymore, so I came up with an idea earlier this year to use one of the Soekris boxes that had been sitting on a shelf for years. They only draw about 5W of power at idle. Just because I wanted to, I set up one of the Soekris boxes with OpenBSD again and use it only to monitor the UPS (just to see the load). Certainly cheaper than buying a network monitoring card for the UPS.

Co-location in 2021

Still part of the same "phase", but I think it deserves its own section as there's quite a bit of stuff here. These are probably the coolest of all the recent pictures, at least to me. About 18 months ago I purchased my first Extreme Networks Summit X440-8t switch from eBay (it was new-ish, with 1 hour of usage recorded by the previous owners). I installed that switch this past July (so now I have two of these switches). I completely overhauled my network setup with the switch, and used almost every port in the process. But that's OK; I don't plan to add anything else (space and power limited). Currently I have two rackmount systems. I'll start with the older of the two, a Dell R230 I bought new in late 2018 and have upgraded a bit since; here is the current config:
Less than a month ago I installed a new member of my family, a refurbished Dell R240 from Dell's outlet store (I wish it had the LCD, those are cool), in my rack. It's mainly there as a backup; I still have on-site support with Dell for the R230 (haven't had to use support yet). The R240 needs more RAM and SSDs before it can be a real backup, but I wanted to get the deal while it was there.
A couple of years ago I added a TerraMaster F4-220 NAS. Originally I had 2x8TB disks in the Dell R230 for my file storage, but I decided to deploy this dedicated NAS and put only SSDs in the Dell:
This past July I added an Intel NUC that I purchased on Black Friday last year and set it up as an ESXi server as well:
I have an identical PC Engines APU2 firewall at my colo. That's it, that's my 20-year history of homelabbing. Hope it was a worthwhile read. (Ran into Reddit's 40,000 character limit so had to cut some things.) (I'll check back later today in case anyone has questions/comments.)
MegaCli -PDOffline -PhysDrv [E:S] -aN
MegaCli -PDMarkMissing -PhysDrv [E:S] -aN

We physically swapped in the new drive and the controller sees it in firmware JBOD, but we cannot get it re-added to the array.
MegaCli -pdInfo -PhysDrv[252:7] -a0
Enclosure Device ID: 252
Slot Number: 7
Enclosure position: N/A
Device Id: 32
WWN: 5000C500CB7BCA70
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 279.396 GB [0x22ecb25c Sectors]
Non Coerced Size: 278.896 GB [0x22dcb25c Sectors]
Coerced Size: 278.875 GB [0x22dc0000 Sectors]
Sector Size: 512
Logical Sector Size: 512
Physical Sector Size: 512
Firmware state: JBOD
Device Firmware Level: N004
Shield Counter: 0
Successful diagnostics completion on: N/A
SAS Address(0): 0x5000c500cb7bca71
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: SEAGATE ST300MP0006 N004WAE2HK4W
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 12.0Gb/s
Link Speed: 12.0Gb/s
Media Type: Hard Disk Device
Drive: Not Certified
Drive Temperature: 27C (80.60 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0: Port status: Active, Port's Linkspeed: 12.0Gb/s
Port-1: Port status: Active, Port's Linkspeed: 12.0Gb/s
Drive has flagged a S.M.A.R.T alert: No

I can see the status of what it thinks is missing:
MegaCli -Pdgetmissing -a0
Adapter 0 - Missing Physical drives
No. Array Row Size Expected
0   2     1   285568 MB

But when I try to replace missing I get an error to the effect of "invalid firmware state":
MegaCli -PdReplaceMissing -PhysDrv [252:7] -Array2 -row1 -a0

We've also tried:
MegaCli -PDMakeGood -PhysDrv[252:7] -Force -a0
MegaCli -PDOnline -PhysDrv [252:7] -a0

without any luck. I've always used 3ware controllers in the past, and replacing a failed drive via tw_cli was always trivial, but this MegaRAID is extremely frustrating. I suspect the core issue is the controller seeing the drive as JBOD, but I don't know how to fix it, and unfortunately this server is in a datacenter several hours away, so I haven't been able to reboot it and check via the controller interface during startup.
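For what it's worth, on many MegaRAID firmwares a drive that comes up in the JBOD state cannot join an array until it is converted to Unconfigured(good), and `-PDMakeGood` sometimes only takes effect after the controller's auto-JBOD behavior is disabled. A hedged sketch of that sequence (252:7 is taken from the output above; verify every state against your own controller before running anything, since these commands change controller-wide behavior):

```shell
# Check whether the controller auto-creates JBODs for new drives
MegaCli -AdpGetProp -EnableJBOD -a0

# Disable auto-JBOD, then convert the drive to Unconfigured(good)
MegaCli -AdpSetProp -EnableJBOD -0 -a0
MegaCli -PDMakeGood -PhysDrv[252:7] -Force -a0

# Confirm "Firmware state: Unconfigured(good)" before proceeding
MegaCli -pdInfo -PhysDrv[252:7] -a0 | grep "Firmware state"

# Now replace-missing and start the rebuild
MegaCli -PdReplaceMissing -PhysDrv[252:7] -Array2 -row1 -a0
MegaCli -PDRbld -Start -PhysDrv[252:7] -a0
```

This is a sketch of the commonly documented procedure, not something verified on this exact firmware; if other drives on the adapter depend on JBOD mode, disabling it affects them too.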
The new ambassador of the Commonwealth of Dominica, Hubert J. Charles, met with Savory & Partners at their Dubai office to discuss matters related to the opening of the new embassy in Abu Dhabi and the consulate general in Dubai. They discussed how citizens of the Commonwealth of Dominica residing in Dubai and the region will benefit from the new embassy and consulate. The ambassador said: "The rise in demand and interest from the United Arab Emirates has contributed significantly to the bilateral relations between the UAE and Dominica, and to the establishment of the Dominica embassy in Abu Dhabi and the consulate general in Dubai." The embassy will play an important role in facilitating services for the growing number of Dominica citizens residing in the UAE and the Middle East, such as passport delivery, passport renewal, no-objection certificates, and document authentication. Many investors who obtained a Dominica passport and applied to the program while in Dubai expressed their appreciation for the new embassy in the UAE. Ambassador Hubert J. Charles also pointed to Expo 2020 Dubai as a major marketing opportunity for Dominica and its citizenship-by-investment program, which has been active since 1993. "Expo 2020 Dubai is a great opportunity to market the program, and as an embassy we also want to encourage Dominican citizens in Dubai to participate in the activities that will be held at the embassy during the expo," stated His Excellency Ambassador Hubert J. Charles. "We have been very keen to make sure we are as supportive as possible of our citizens, not just as a government but as citizens as well." "For more than 20 years, the government has remained fully committed to the program and has made significant and substantive improvements to it." His Excellency said that apart from having a citizenship program focused on bringing investment to the island, they "want to maximize the interaction between the citizen and the country, whether through investments, tourism, or increased knowledge of the country." "We want to encourage the importance of having new investors on the island, and we also want citizens who are more engaged with the country." Expo 2020 Dubai is drawing the attention of thousands of investors from around the world to Dubai.

It will be an ideal opportunity to present the program as one of the most reliable options for obtaining a second passport from the Caribbean. Other aspects of the country and its economic growth were discussed during the meeting, including the upcoming opening of a new international airport, for a total of three new airports, and a new seaport. Expect more details soon, dear reader. The citizenship-by-investment program was established in 1993 by the government of the Commonwealth of Dominica and is one of the oldest active economic citizenship programs in the world. For more than 20 years, the government has remained fully committed to the program and has made significant and substantive improvements to it.
$ sudo lspci -v | grep -A 14 -i 3ware
04:00.0 RAID bus controller: 3ware Inc 9750 SAS2/SATA-II RAID PCIe (rev 05)
        Subsystem: 3ware Inc 9750 SAS2/SATA-II RAID PCIe
        Flags: bus master, fast devsel, latency 0, IRQ 27, NUMA node 0
        I/O ports at 7000 [size=256]
        Memory at df760000 (64-bit, non-prefetchable) [size=16K]
        Memory at df700000 (64-bit, non-prefetchable) [size=256K]
        Expansion ROM at df740000 [disabled] [size=128K]
        Capabilities: [50] Power Management version 3
        Capabilities: [68] Express Endpoint, MSI 00
        Capabilities: [d0] Vital Product Data
        Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [138] Power Budgeting
        Kernel driver in use: 3w-sas
        Kernel modules: 3w_sas

From my troubleshooting, tw_cli is the way to configure the RAID. Using this utility, it cannot find the controller.
$ sudo ./tw_cli
//ubuntu> show
No controller found.
Make sure appropriate AMCC/3ware device driver(s) are loaded.

Here's hoping someone knows more than I do. Cheers!
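Since lspci shows the kernel already binding `3w-sas` to the card, the usual suspects are a tw_cli build that predates the 9750 or missing device nodes. A hedged checklist of things worth verifying (paths and behavior are my assumptions, not confirmed on this machine):

```shell
# Confirm the 3w-sas module is actually loaded
lsmod | grep 3w_sas

# Print the CLI/API version; the 9750 is only supported by tw_cli
# from the later (10.x-era) 3ware codesets, and older builds silently
# report "No controller found"
sudo ./tw_cli show ver

# The 3w-sas driver exposes the controller via /dev/twl* character
# devices; if none exist, the CLI has nothing to open
ls -l /dev/twl*

# Kernel messages often explain an initialization failure
dmesg | grep -i 3w-sas
```

If the version check shows an old build, grabbing the tw_cli that shipped with the 9750's own driver/firmware package is the most likely fix.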