I’m currently working on a pretty cool project (shh, it’s a secret!). It’s another one of those “leave some sections at full height but cut the majority away” ideas, so there are areas of the project with 0 depth nested inside areas with 100% depth.
Thing is, when I generate the toolpaths, Easel ignores the 0-depth areas and generates toolpaths inside them anyway.
So what gives? Is this part of the new backend architecture that they’ve started? Some bug in the tooling logic? I’ve done several projects before that had this type of design element with no problem. The image preview looks fine, but the toolpaths are borked.
Just wasted an afternoon trying to import SVGs with cutouts and running into all sorts of problems. Was planning a major bit of work this weekend but am currently scuppered. I’m guessing a bug introduced when the tabs work was added?
If you guys can fix this ASAP, I for one would really appreciate it, even if it means rolling back to the pre-tab-placement version. I would also like to see some more formal test and release provisions put in place: maybe things like version numbers and release notes in the app, so we know where we stand?
Point of interest: I can do what you are looking for all day long with CamBam. Yes, I had to spend $149.00 after I used up the 40 free projects, but the program has paid for itself.
I’m sorry you encountered this problem. We carve projects ourselves every week, and I know how frustrating it is to waste time and material on failed projects, especially when it’s not your fault.
It turns out that this regression was not caused by the interactive tabs feature that was released yesterday. It was introduced by a toolpath generation improvement we also pushed yesterday to always carve fills from the inside out. (We’ve been doing a lot of experimentation with speeds/feeds and toolpaths recently and learned that always taking an inside-out approach leaves a better finish around the edges of fills.)
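Roughly speaking, the change looks something like the sketch below (simplified and illustrative, not our actual code; `Contour`, `area`, and `targetDepth` are made-up names). It also shows where the fix belongs: zero-depth regions should be filtered out before any fill passes are ordered.

```typescript
// Illustrative sketch: order fill contours from the inside out,
// skipping regions whose target depth is 0.

interface Contour {
  points: Array<{ x: number; y: number }>;
  targetDepth: number; // 0 means "leave this region at full height"
}

// Shoelace formula for the area of a closed polygon.
function area(c: Contour): number {
  let sum = 0;
  for (let i = 0; i < c.points.length; i++) {
    const a = c.points[i];
    const b = c.points[(i + 1) % c.points.length];
    sum += a.x * b.y - b.x * a.y;
  }
  return Math.abs(sum) / 2;
}

function orderFillPasses(contours: Contour[]): Contour[] {
  return contours
    .filter((c) => c.targetDepth > 0) // never carve 0-depth regions
    .sort((a, b) => area(a) - area(b)); // smallest (innermost) contours first
}
```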
As far as formal testing goes, we do have an extensive suite of automated tests that run on every “build” of Easel. It is a requirement that all of those tests pass before a build can be merged and deployed, and it is required that every new feature have automated tests that verify its functionality. While these tests cover virtually every feature of Easel, their coverage of the correctness of toolpath generation could admittedly be expanded considerably. This is something that we will work on improving going forward.
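To give a concrete (if simplified) idea of what expanded toolpath coverage could look like, here is a self-contained sketch of a regression check for exactly the bug reported above; `generateFill` and the geometry helpers are stand-ins, not Easel’s real API:

```typescript
// Hypothetical regression check: generated toolpath points must never
// land inside a zero-depth region. All names here are stand-ins.
import assert from "node:assert";

type Point = { x: number; y: number };

// Standard ray-casting point-in-polygon test.
function insidePolygon(p: Point, poly: Point[]): boolean {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const a = poly[i], b = poly[j];
    if (a.y > p.y !== b.y > p.y &&
        p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x) {
      inside = !inside;
    }
  }
  return inside;
}

// Toy raster-fill generator standing in for the real one: sample the
// pocket and drop any point that lands inside a zero-depth island.
function generateFill(w: number, h: number, islands: Point[][]): Point[] {
  const pts: Point[] = [];
  for (let y = 0; y < h; y += 0.1) {
    for (let x = 0; x < w; x += 0.1) {
      if (!islands.some((poly) => insidePolygon({ x, y }, poly))) {
        pts.push({ x, y });
      }
    }
  }
  return pts;
}

// A 0-depth island inside a 100%-depth pocket, as in the reported project.
const island: Point[] = [
  { x: 1, y: 1 }, { x: 2, y: 1 }, { x: 2, y: 2 }, { x: 1, y: 2 },
];
for (const p of generateFill(3, 3, [island])) {
  assert(!insidePolygon(p, island), `toolpath entered 0-depth island at ${p.x},${p.y}`);
}
console.log("zero-depth island left uncarved");
```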
In addition to all tests passing before a feature is merged, every feature must also be code reviewed and signed off on by at least one developer who did not build it (we also often have 2 developers pair on a feature together, and a 3rd developer review it). This also helps prevent defects and ensures that multiple members of the team are familiar with all aspects of the code.
I understand where you’re coming from on the request for release notes and version numbers, as well. This is something we can explore. Since Easel is a web application, it’s updated extremely frequently, certainly much more frequently than, say, your web browser or an app on your phone. That being said, every change is tracked in git, and every feature merged has an associated description from its pull request. We could add a step to the deployment process to post release notes for every change, or at least every significant change. It’s something we’ll discuss internally.
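As a rough illustration (not our actual tooling, and assuming deploys are tagged and merges are GitHub-style merge commits whose bodies carry the pull request titles), such a deployment step might be as small as this:

```typescript
// Sketch: collect merge-commit descriptions since the last release tag
// and turn them into plain-text release notes.
import { execSync } from "node:child_process";

function git(cmd: string): string {
  return execSync(`git ${cmd}`, { encoding: "utf8" }).trim();
}

const lastTag = git("describe --tags --abbrev=0"); // assumes deploys are tagged
// Merge-commit bodies hold the pull request descriptions mentioned above.
const merges = git(`log ${lastTag}..HEAD --merges --pretty=format:%b`)
  .split("\n")
  .filter((line) => line.length > 0);

const notes = [`Changes since ${lastTag}:`, ...merges.map((m) => `- ${m}`)].join("\n");
console.log(notes); // a real step would surface this in the app
```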
Again, I’m sorry you encountered this issue. We take problems like this very seriously and strive to avoid them. Our goal is to make Easel the best CAD/CAM/machine control tool in the universe. A fix for this problem is forthcoming.
Congratulations on implementing automated regression testing. I manage large software development projects, and spending the time/money to develop a comprehensive suite of automated test cases is really hard to get done, especially when you are just trying to get a new product “out the door”.
And thank you for taking the time to explain your development process. It doesn’t sound dissimilar to how my team works, if I’m honest.
I mentioned versioning and release notes because it was something we implemented so that our users could report exactly which version they were running when problems arose. We push new versions sometimes daily, and users don’t always close their browsers or flush their caches because they are a full-time operation.
Because our users (who actually work for the same company) are happy to dogfood new features, we serve the edge version at the default URL, with a versioning dialog that allows them to use any of the previous four versions just in case they find a showstopper (they work 24/7/365, but we work 9-5 weekdays, I wish).
So they can always roll back themselves and are able to do a side-by-side comparison. With release notes attached to each version, they can see what we think we changed in plain English (rather than our often cryptic PR comments).
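On the client side, the mechanism does not need to be elaborate. Here is a rough sketch of how that kind of version pinning could look (the names and the localStorage approach are illustrative assumptions, not a description of our exact setup):

```typescript
// Sketch: keep the edge build plus the previous four versions selectable,
// and let the user pin one until they choose otherwise.

interface Release {
  version: string;   // e.g. "2016-03-04.2" (illustrative scheme)
  notes: string;     // plain-English release notes shown in the dialog
  bundleUrl: string; // where that build's assets live
}

const KEEP = 4; // previous four versions stay selectable

function selectableReleases(all: Release[]): Release[] {
  return all.slice(0, KEEP + 1); // newest first: edge + previous four
}

function pinVersion(version: string): void {
  localStorage.setItem("pinnedVersion", version);
  location.reload(); // reload so the pinned bundle is loaded
}

function activeRelease(edge: Release, all: Release[]): Release {
  const pinned = localStorage.getItem("pinnedVersion");
  return selectableReleases(all).find((r) => r.version === pinned) ?? edge;
}
```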
I’m not suggesting you have to go this far. But with the elapsed time between code merge and deployment usually much shorter these days, we find that catching the bugs unit testing or pair programming doesn’t catch is now less likely to happen before release, and once a change is deployed you are playing rollback. We have an embedded tester as well, who tests all new features and their effects on established features, but we are all human, as are those who write the tests.
It doesn’t matter how hard you try, of course; things like this get through, and your users are going to beat you up about it, usually on a Friday.
Sometimes you see a company admit to an issue and fix it pretty quickly. This is the mark of a good company, in my opinion. It is easier for said company, or a representative of that company, to use words like “we” and “our” to spread the blame load around a little. But when they admit to the issue, fix it, and then the person who made the mistake owns up to it publicly… that, I would say, is the mark of not just a good company but a great company.