These are the kinds of things we love to hear about.
Thank you for sharing this experience.
Not sure if you ordered the new hardware yet.
It would be nice if you could install another agent or two in your neighborhood.
Then you could better confirm whether it's your router or something on the provider's side.
Mike
Ok, keep me informed. I sure hope that's what the problem is.
BTW, another way to help confirm is to install another agent (or more) in the area. That will tell you if there is a common problem that others are seeing as well. If others in the area are also seeing problems with that second hop, that tells you it's not your router.
I wish there were a way other than buying a router, especially if that is not the problem.
I assume you've looked at the router's logs to see if you can spot anything?
Hi,
Well, without any physical access, here's what I think.
The majority of the issues are definitely on hop 2. Hop 2 seems to be the WAN side of your router.
The WAN side of the router seems to change IPs quite a bit, which means it's getting a DHCP IP upstream.
I assume you don't have a fixed (static) IP at that location.
To me, it's either the local router or the interface that your WAN side is connecting to.
When it says "Hop Down", does that mean it has reached that point but can't get any further?
Or is it having trouble communicating at that point?
The agent keeps track of hops on a very regular basis. When a hop goes away, it retains that information and puts it into the path as the downed hop. Meaning, if it was able to reach hop 5, for example, but something happens to hop 5, it knows that it cannot reach hop 5 anymore and uses its cache as the information for the missing hop.
So to answer your question, if the agent says a hop is down, it means the hop that used to be there is suddenly down. Looking at your reports, it looks to me like hop 2, which seems to be the WAN side, the ISP side, keeps going down or is configured in a way that messes up your routing.
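If it helps to picture it, here's a minimal Python sketch of the caching idea. This is not our actual agent code; the names and addresses are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Hop:
    ttl: int   # position in the path (hop 1, hop 2, ...)
    ip: str    # address that answered at this TTL

def downed_hops(cached: list[Hop], latest: list[Hop]) -> list[str]:
    """Compare the cached path to the latest probe and report missing hops."""
    seen = {h.ttl for h in latest}
    events = []
    for hop in cached:
        if hop.ttl not in seen:
            # The hop used to answer but no longer does: report it using the
            # cached address so the report still identifies the device.
            events.append(f"Hop {hop.ttl} down (last seen as {hop.ip})")
    return events

# Example: hop 2 (the WAN side of the router) stops answering between probes.
cached = [Hop(1, "192.168.1.1"), Hop(2, "10.20.0.1"), Hop(3, "60.0.0.1")]
latest = [Hop(1, "192.168.1.1"), Hop(3, "60.0.0.1")]
print(downed_hops(cached, latest))  # ['Hop 2 down (last seen as 10.20.0.1)']
```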
Personally, I would suspect the router since that's what it looks like. If you have a spare, while it's time consuming, it seems worth the effort to try another one. If that changes nothing, I'd have to say there is something that is not configured properly on the ISP side.
Got it.
The new agent seemed to have the same private IP as the 507 and I wanted to make sure you had removed the old one if that was the same PC.
It's interesting that the DNS servers aren't showing up. There was a bug some time back but it's been fixed.
Now, back to your question.
On first look, and based on your input, the second hop seems to be the ISP on the WAN side of your router/modem. I think we can confirm this, especially because you say you're using 192.168.x.x and have no 10.x.x.x networks/IPs on the LAN side. Meaning, your router is not running multiple networks on the LAN, so it's safe to say this is outside of your LAN.
We are constantly trying to tune our classification methods to determine what is inside and outside the LAN, but it's pretty hard to do when both sides are private IPs. We are considering adding a function that would allow members to manually configure what they know to be fact so that the classification can be better. The concern with that is that non-technical people might end up messing up their reports and not seeing the right info. It seems better to have members ask us to intervene, at least for now. Maybe with more input, we'll find the right balance.
So, as I see things in terms of hops, it's like this.
1 192.168.1.1 LAN side of your router/modem
2 10.20.x.x WAN side of your router/modem (your reports should be saying 'with your provider')
3 60.x.x.x TPG Telecom Limited
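To show why this classification is tricky, here's a rough Python sketch of the private-vs-public check. Both hop 1 and hop 2 land in private address space, so the address alone can't tell your LAN apart from carrier space (the hop 2 address is made concrete just for the example).

```python
import ipaddress

def is_private(ip: str) -> bool:
    """True for private address space, including RFC 1918 (10/8, 172.16/12, 192.168/16)."""
    return ipaddress.ip_address(ip).is_private

# Hops as in the list above.
for ttl, ip in [(1, "192.168.1.1"), (2, "10.20.0.1"), (3, "60.1.2.3")]:
    side = "private (LAN, or carrier space on the WAN side)" if is_private(ip) else "public (ISP)"
    print(f"hop {ttl}: {ip} -> {side}")
```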
One question for you. Does your provider have two business names, like maybe one company bought the other one out?
We seem to find both TPG Telecom Limited and COMINDICO-AP SOUL Converged Communications Australia, AU.
We'll be talking about and looking at the classification code again today to see what else we can do. We keep tweaking things to get better results. These kinds of questions help us improve the service, and we appreciate your taking the time to post.
We found the DB error which is now fixed.
Your 31507 reports are back online but the agent continues to be offline.
We looked and it is not communicating with our network at all. Nothing for at least four days.
One odd thing is that both your 31507 and 31508 show no DNS servers.
Maybe check it or re-install, as it seems the Windows agent service may not be running.
If you do re-install on the same PC, make sure there is no previous copy already installed and if there is, remove it before re-installing.
Not sure I'll have time to deal with the hops issue tonight but will post if I do; otherwise, I'll post tomorrow.
Great, thanks for confirming that.
Right now, we're working on finding the DB bug that you've submitted. If you could hold off on contacting the ISP, I can help you with that as soon as we solve this other problem. Just wanted to acknowledge your post.
Hi Sam.
First, can you share what the agent ID is so that I can go look at its reports.
We received an email from someone named Sam earlier, explaining that they were seeing a database error. I looked into this and, sure enough, whoever reported it let us know about a bug we didn't know about. We appreciate bug reports so that we can fix them.
Once you confirm the agent ID, I can check out the data and then respond.
Mike
Sure, let's give it some test time. Once you confirm all looks good, we can add it into the ARM list.
We would like to get to a point where we aren't fully saturating the link and are only doing low-bandwidth checks. Full saturation testing is fine if you're looking for top speed, but if you mainly want to know that you have usable bandwidth, since you know it is usually a shared service anyhow, that seems more Internet friendly.
We do have something that lets you limit the speed of the test that we were working on but it is not in the dashboard.
We recently added a low-bandwidth test for our business customers which specifically does not saturate the link but instead does a short, low-data test to get some idea of usable bandwidth. Members can set a low-bandwidth threshold to receive notifications if bandwidth falls below a certain level.
For example, say you are monitoring a 50Mbps connection and you know you're ok down to maybe 5Mbps. You could set the threshold to 5Mbps, and if the connection dips below that, it would send you an alert.
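As a rough illustration only (the URL and numbers below are placeholders, not our actual endpoints), the check amounts to something like this in Python:

```python
import time
import urllib.request

THRESHOLD_MBPS = 5.0  # e.g. you're ok down to 5Mbps on a 50Mbps link
TEST_URL = "https://example.com/1MB.bin"  # placeholder test payload

def measure_mbps(url: str) -> float:
    """Download a small fixed payload and compute throughput in Mbps."""
    start = time.monotonic()
    data = urllib.request.urlopen(url, timeout=30).read()
    elapsed = time.monotonic() - start
    return (len(data) * 8 / 1_000_000) / elapsed

mbps = measure_mbps(TEST_URL)
if mbps < THRESHOLD_MBPS:
    print(f"ALERT: usable bandwidth {mbps:.1f}Mbps is below {THRESHOLD_MBPS}Mbps")
else:
    print(f"OK: {mbps:.1f}Mbps")
```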
You may have noticed that the agents list, dashboard and other things are slowly changing. This is because of a big code unification project we have going on in the background. I'll be posting about it shortly.
Mike
Hi,
Thanks so much for the great tutorial. We would be happy to add the Orange device to our list if you feel it seems pretty reliable so far.
I don't think we have any special installation notes for the Pi devices like we do for the Linux version.
Do you mind if I turn this information into a post? Either a post on the web site or a link to your post here.
Any luck with that Shelly? That's one of the devices we're all curious to find out about.
Mike
Great question.
The main point of the article is that there is no way to get anything conclusive. It's just another test, another stat that needs to be taken into account with more information.
Speed testing is definitely useful, but it is a moment-in-time test using shared bandwidth, where the test can go across networks we have no control over and have no idea how they are set up. More importantly, most of these tests seem to terminate on CDNs, which are optimized to cache data mainly for streaming and other media.
That's why we used a big file so that the transfer could settle and we could (should) get a fairly sustained speed going. Even when using iperf which is a more real world test, it is inconclusive because the test travels over networks we don't control.
The article mainly asks questions like: why did the transfer drop down to kilobytes when it's a 50Mbps pipe that is barely being used? I don't think we are implying anything about the provider but pointing out that we simply don't know, since we don't control their network. We can make some assumptions by testing different kinds of data to different locations, but in the end, it is inconclusive.
Yes, a web site could be slower but 30 seconds is way up there in terms of time to render pages. The point there was simply to see if there was something going on with the file transfer specifically or the overall bandwidth. Meaning, maybe the sustained speed caught the attention of an application manager which automatically tried to keep the transfer at a certain speed. No idea.
One of the more interesting things during the testing was that we checked the hops to the destination and could see that the provider actually had something on the edge of the data center we were testing to. Meaning, our location, the provider, level3, back to the provider then into the DC. It seemed odd that we could never get anything close to 50Mbps.
I'm getting about 200-300Mbps on my wired speedtest. It's still useful in that I will spot a relative degradation, but I'm pretty sure if we used iperf or qperf to test between our connections we'd almost always get 1000Mbps.
When you say test between your connections, do you mean with the same provider, or would those tests go across multiple network owners to get from source to destination? Testing between servers in a data center, or even to another data center managed by the same org, almost always shows the correct throughput, but that's because in those cases the network owner is obligated to make sure those links can handle all of the traffic required. Cable companies don't work this way; it's a best-effort service with 'acceptable' ranges to cover all their needs. I cannot speak with authority since I've never been part of a cable company, but I've spent years fighting with them to get what we pay for and to get them to fix problems for us and for customers when we offered ISP and MSP services.
We did a lot of this kind of testing, and while speeds can remain around whatever they tested at, most of the time it's up and down, as expected.
My guess is the speedtest uses a very small file, and so on my big connection it's measuring the time taken to establish the connection too, which is why it nets out at about a quarter of the actual bandwidth?
In fact, OTM tests against fast.com, which has an interest in making sure that consumers are getting the bandwidth they are supposed to be getting. If you are seeing what you think is setup time as a delay, that's possible, since it has to spawn a process, hit at least three servers, calculate the results, etc. The browser test is instant if you go to fast.com.
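To put rough numbers on the setup-time idea above (every value here is an assumption for illustration, not a measurement):

```python
# Why a tiny test file understates a fast link: fixed setup time dominates.
FILE_MB = 1.0        # small test payload
LINK_MBPS = 1000.0   # nominal gigabit link
OVERHEAD_S = 0.025   # assumed handshake + TCP slow-start ramp-up

transfer_s = FILE_MB * 8 / LINK_MBPS                 # 0.008s of pure transfer
measured_mbps = FILE_MB * 8 / (transfer_s + OVERHEAD_S)
print(f"nets out around {measured_mbps:.0f}Mbps")    # ~242Mbps, roughly a quarter
```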
However, hardware agents that we sell do something different, similar to what you said. They run a shorter, smaller test to get a quick average.
Unlike the usual speed testing that people are doing, we aren't really interested in fully saturating everyone's connections over and over again.
The main purpose of our speed test is not about finding out what your maximum speed is but if you have usable bandwidth for your requirements.
Our speed testing tools are still in their infancy as we try to find the best way to give the most useful information without fully saturating links and using up all kinds of data, since many are on data plans. The speed testing will end up being to and from our network only at some point, when we can find the best way to get useful results. We've also tested limiting the speed to fast.com so as not to use up users' bandwidth/data or the wonderful fast.com service.
BTW, there are a lot of interesting blogs/articles about fast.com and why they built the test and made it download only. It's quite a read.
I hope this helps clarify the bit of testing we did that day :).
I believe you experienced that when you first joined, with the suggested linked files etc., which is partly what we were doing and were unable to keep track of.
We tried offering a direct download version, but it was too complicated to maintain.
The reason for so many updates lately is mainly trying to get to a final, stable version that we won't be changing again for a long time.
Regards,
Mike
Hi,
We are still monitoring this situation. You'll have to respond here and let us know when your agents are back online and then we'll be able to upgrade them.
Regards,
Mike
Hi,
And this is why forums will be so much nicer than form/email support :).
It was updated to v1.58.2002 which is why the above version is no longer available. We failed to link the new files.
Thank you for letting us know. Try it now; it should be fine, but FYI, you'll be getting the latest version.
Maybe the loss of net neutrality means opportunities
Let’s be honest, the main factor behind eliminating net neutrality is pure greed and control.
First, Internet Service Providers (ISPs), like any other business, sometimes prefer not to invest in better infrastructure until they absolutely have to, such as when there is a competitive reason for doing so or when there are simply too many complaints.
There is little accountability (until now) for outages that end up costing consumers, businesses and organizations, and outages are perhaps admitted to only when they are too large to hide. Outages cost consumers and businesses in countless ways, including the amount of time we have to spend on troubleshooting and calls to provider support lines.
For an ISP in an area with limited competition, there's not much reason, beyond cranky customer support calls, to upgrade the infrastructure. Without rules such as net neutrality, it becomes easier to throttle back access to high-bandwidth sites or to users who go beyond an acceptable usage level; worse, it can be quietly imposed, which isn't fair to consumers.
Now the talk is that providers could come up with premium packages for increased web access, or may end up forcing online services to pay fees which in turn will trickle back down to consumers. As always, these moves end up costing us all, not only by creating monopolies that limit our choices but also because, as these companies grow, they keep trying to make it harder for new initiatives offering competitive services to come to life.
As consumers, we hold much of the blame for our lack of interest in this issue, hoping that others will stand up and fight for net neutrality while we’re busy streaming movies on our phones and hanging out on social media. It’s easy to think it won’t happen until it does.
So, what should we be worried about?
Having no rules in place not only allows providers to throttle bandwidth or charge more, it also opens the door to their being paid to limit or block access to sites and information. It may start small or not seemingly affect us at first, but at what point do we create a tiered system of access to the Internet, where everything is micro-controlled by multiple companies before you ever get to it?
Imagine an ISP being paid to block information, which in turn pays search engines and others to control access for certain areas or demographics. Things could get out of hand very quickly.
Having the ability to prevent people from voicing opinions, enjoying freedom of speech and sharing information should not be overlooked. While it may not be in the ISP’s interest to block customers, if there is enough money being paid to make these efforts worthwhile, these things could easily be done.
Community Broadband
So, where does this leave those who would like to buy services from companies that aren't going to earn their keep through such methods? The answer could be with cities, communities and/or consumers working together to bypass local Internet providers.
Telcos and broadband companies do all they can to prevent competition in their areas, which is something that may need to be addressed. Assuming palms are not being greased in local government, new network operators can work toward getting the permissions and whatever permits may be needed to establish their own private Internet offerings.
Citizens may also need to stand up to possible push-back from their own city officials and typically that can be done by voting those who don’t help out of office.
The Internet is just one network
The Internet is just a network of interconnected networks and machines. There have been many propositions and even efforts such as mesh networks and others to build new networks. Many already exist. Of course, we dilute the incredible amounts of information we have access to by going down this road and eventually, someone will tie all these different networks together too.
Before all that happens, the first step is to get control of what we already have and bypass the local operators if they are going to play the access control game. There needs to be a movement in every community to starve these companies of the easy money they make on consumer-grade, best-effort Internet services while, at the same time, creating an opportunity for new jobs. We could easily get back to the early days of smaller, local ISPs that work very hard to keep their customers happy.
Community based Internet services
This may not be an option in all areas, but there are many locations well connected enough that a person, group of people or city could purchase third-party connectivity, which comes with an SLA, from any number of brokers/vendors.
Many cities are now building their own fiber networks when construction begins in new neighborhoods, or have started doing so regardless. Copper can be provisioned with the local telco, and wireless can also be used where a direct connection is not possible. Wireless speeds are continuously getting faster, and wireless is much less expensive than digging up the streets.
Net neutrality, one area at a time
Technicians could work out what would be required for X number of people using services such as Netflix and others, to come up with a reasonable per-user bandwidth, price and, of course, overall throughput.
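As a back-of-envelope example (the user count and ratios below are assumptions for illustration, not recommendations):

```python
# Rough sizing for a shared community link.
USERS = 200
PER_USER_MBPS = 5.0       # e.g. roughly one HD stream per active user
PEAK_CONCURRENCY = 0.35   # assume ~35% of users are active at the same time

required_mbps = USERS * PER_USER_MBPS * PEAK_CONCURRENCY
print(f"Provision roughly {required_mbps:.0f}Mbps for {USERS} users")  # ~350Mbps
```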
Data plans are a money-making opportunity for ISPs which private services could potentially eliminate. Making sure that everyone gets what they pay for would also be based on certain criteria for a customer base. Heavy users would either have to pay more, as usual, or continue with their current ISP.
However, instead of access to information and sites being limited in certain ways, everyone would have full and open access without concern that the local provider is somehow limiting them. No quiet throttling, no unfair slowdowns because there isn't enough throughput to handle all of the customers. Best of all, no provider playing around with what you can or cannot access; everything would be above board and transparent.
Well, that’s the hope at least and it starts by having the right people in place with such efforts.