Even if someone somehow fixed all the geometry limitations that come with Ptex, it would still be missing a 2D representation. I think having one adds a lot of flexibility to the texture painting process.
Forum Posts
I've never pushed PTEX anywhere I've worked (and I have never seen it recommended by anyone).
I made a tweaked version for work, which basically removed PySide as the image reader and replaced it with ImageMagick. I highly recommend this if you have access to the ImageMagick shell command. I was getting crashes on thumbnail generation using the PySide technique.
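For reference, a minimal sketch of the ImageMagick approach: shell out to the `convert` binary to write the thumbnail instead of decoding the image in PySide. The paths and size here are placeholders; this assumes `convert` is on PATH.

```python
import subprocess

def magick_thumbnail_cmd(src, dst, size=128):
    """Build an ImageMagick `convert` command that writes a thumbnail.

    `-thumbnail` resizes and strips most metadata, keeping the output
    small; `[0]` reads only the first frame/page of the source file.
    """
    return ["convert", "%s[0]" % src, "-thumbnail", "%dx%d" % (size, size), dst]

def make_thumbnail(src, dst, size=128):
    # Requires the ImageMagick `convert` binary on PATH.
    subprocess.check_call(magick_thumbnail_cmd(src, dst, size))
```

Running the conversion in a subprocess also means a bad or corrupt image can only fail the external process, rather than crashing the host application the way an in-process decoder can.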
In my opinion the bookmark part of the tool is the most flawed, and the browsing UI is clunky. I've been toying with the idea of overhauling the tool.
campi wrote:
OpenSubdiv will only help you if it's implemented in a sensible manner.
Because you can implement it in a way where you are still subdividing the whole model instead of only what is on screen.
Which brings you right back to your original problem that you have huge geo data in your project. A 200k object subdivided 3 times is already 12.8 million polys.
And 200k is not really that much.
So if OpenSubdiv comes, it needs to be view-dependent, not the blanket subdivision I can do myself in Maya and don't need Mari for.
If the Foundry can't get adaptive subdivision to work, then it's probably not worth implementing.
http://graphics.pixar.com/opensubdiv/do ... ubdivision
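The quoted numbers check out: for a quad mesh, each Catmull-Clark subdivision level quadruples the face count, so a quick sanity check for blanket subdivision is:

```python
def subdivided_faces(base_faces, levels):
    """Face count after `levels` of Catmull-Clark subdivision.

    On a quad mesh each level splits every face into four, so the
    count grows by a factor of 4 per level.
    """
    return base_faces * 4 ** levels

# A 200k-face object subdivided 3 times:
print(subdivided_faces(200000, 3))  # 12800000
```

Which is why view-dependent (adaptive) subdivision matters: the cost of blanket subdivision grows exponentially with level, regardless of what is actually visible in the viewport.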
Dare I mention it....
....
....
.............
Transparency.
(Gasp!)
Wow, 60GB is just asking for a corrupted file :)
Remember that Mari does its garbage collection on load, not on save, so the steps would be:
1. Un-cache
2. Save and close
3. Open
4. Save and close
5. Archive
Nvidia Quadro K4200 4 GB vs. GeForce GTX 970/980
I've used both, get the GTX 980 4GB.
campi wrote:
Yeah, it is an absolute dog. Weirdly it isn't even the viewport that is directly to blame, but the UI calling the viewport constantly without reason. That slows everything down.
Pretty much any action in the UI will trigger a shader re-compile, even renaming layers or channels, or selecting something without performing an operation on it. I think it also uses RGBA data for masks, which must inflate GPU processing times. I wish there were more ways to force optimization via the API if the Foundry doesn't want to do it.
The underlying problem is that most artists I know work messily and inefficiently, which compounds the performance problem.
I forgot to mention, I would like to see OpenSubdiv implemented in Mari for the next release. It would be useful and a selling point.
Ok, cool. Thanks!
I could probably make my wish list 10 pages long. But my top 5 would be:
1. Improved Performance - Mari is slow at most things.
2. Improved Python API - The Python API seems hastily put together.
3. A Path/Vector tool - 5 years of every single user asking for this, and still nothing.
4. A price drop - Mari is not the holy grail of software.
5. Improved Post Process Filters - Ideally re-written from the ground up.
My guess is that none of these will be part of Mari 3, but we will probably see some new tech or features that will become selling points.
Is it possible to get geometry problems reported back from the mari.geo.load or mari.projects.create functions?
If I load geo with bad UVs manually, Mari alerts me, but I don't see any way to catch that error via Python.
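One workaround pattern while the API doesn't report this directly: wrap the load call and convert any raised exception into a result you can inspect. The sketch below is generic Python; `mari.geo.load` would only be passed in as a callable, so nothing here assumes behavior the Mari API may not actually have.

```python
def try_load(loader, *args, **kwargs):
    """Call a loader (e.g. mari.geo.load) and return (result, error).

    If the loader raises on bad geometry, the exception message is
    captured instead of propagating. If it fails silently, `error`
    will be None and you still have to validate the result yourself.
    """
    try:
        return loader(*args, **kwargs), None
    except Exception as exc:
        return None, str(exc)
```

The caveat in the docstring is the real limitation: if Mari only shows the bad-UV warning in the UI and the Python call succeeds anyway, no wrapper can recover that information.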
I'm hoping we get a ton of shading and rendering features but no actual texture painting features.
Is the extra VRAM of a Titan X worth it for Mari?
In my experience, every GB is worlds better. Going from 2 to 3 was big, and going from 3 to 4 was huge. It all depends on the drivers; if they play nice with Mari on your platform, then it's worth it if you are doing insanely large and/or inefficient Mari texturing.
Known issue in 2.6v4 on Windows. I am unaware if they have released a hotfix for this yet, or if we have to wait for a release.
In order for this to work, "Cache up to here" would need a start and an end point; right now it only has an end point and assumes the bottom of the stack as the start point. So from what I can tell, this is not possible. As I understand it, anything above the layer you wish to keep live would have to be cached individually.
Oh, you are right, I was assuming the dual GPU. X and Z get confusing.
Is the Titan X on the same architecture as the Titan Black?
I don't think the Foundry has communicated any intention of supporting Nvidia SLI or UMA?
Unless something has changed, it is the same as before, you get 1 GPU and its accompanying memory.
Perl? What's that? :p
Also, to be clear, the python module I posted does not rename; it simply allows you to use it in your scripts as a conversion function.
More of a tool than a solution - that's the idea I was going for in building this toolset.
I started this module for converting between different UV coordinate notations. Please feel free to test it; if you find a bug or want to add features, feel free to branch off your own (and submit a pull request, let's collaborate!). I intend to add other Python utilities in here; if anyone has suggestions, let me know. I would like to have a collection of often-used functions. It seems like most of the time, we TDs tend to re-write this stuff over and over.
https://github.com/bneall/TexUtil
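For anyone wondering what "converting between UV coordinate notations" means in practice, the most common case is mapping between UDIM tile numbers and zero-based (u, v) tile indices. This is a generic sketch of the standard formula, not the actual TexUtil code:

```python
def udim_from_uv(u, v):
    """UDIM tile number for zero-based tile indices (u, v).

    UDIM numbering starts at 1001 and runs 10 tiles wide in u.
    """
    if not 0 <= u <= 9:
        raise ValueError("u tile index must be in 0-9")
    return 1001 + u + 10 * v

def uv_from_udim(udim):
    """Zero-based (u, v) tile indices for a UDIM tile number."""
    offset = udim - 1001
    return offset % 10, offset // 10

print(udim_from_uv(0, 0))  # 1001
print(udim_from_uv(3, 2))  # 1024
print(uv_from_udim(1024))  # (3, 2)
```

The range check on `u` matters because UDIM space is only 10 tiles wide; a u index of 10 would silently alias into the next v row without it.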
I'm using:
Adobe Photoshop Version: 2014.2.2 20141204.r.310 2014/12/04:23:59:59 CL 994532 x64
Operating System: Windows 7 64-bit
Version: 6.1 Service Pack 1
Even so, it would save a step for me. I wonder if they could hook the resource compiler straight up to Mari and skip the .tif altogether.
Anyway, I digress, I am definitely interested in how he did it.
This would all be a non-issue for me if we had the CryTif exporter for Mari :)
Later, when I get back on Linux, I will test this out with Nuke and others and see what's going on exactly.
Ok, figured it out. This is what I get in Photoshop from exporting the following:
.tif gives me premultiplied against black.
.tiff, .png, and .psd give me alpha applied as transparency.
And finally, .tga gives me intact RGB and alpha together.
No idea why this would differ across different machines with the same version of Mari, but at least it's working now.
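To make the difference between those exports concrete: a premultiplied export stores RGB already scaled by alpha (composited against black), while a straight export keeps RGB intact and leaves the masking to the alpha channel. A minimal sketch of the math, per pixel on 0-1 floats:

```python
def premultiply(rgb, a):
    """Straight RGB + alpha -> RGB premultiplied against black."""
    return tuple(c * a for c in rgb)

def unpremultiply(rgb, a):
    """Recover straight RGB from premultiplied RGB (where alpha > 0)."""
    if a == 0:
        return (0.0, 0.0, 0.0)  # color is unrecoverable at alpha 0
    return tuple(c / a for c in rgb)

# A 50%-opaque pure red pixel:
print(premultiply((1.0, 0.0, 0.0), 0.5))    # (0.5, 0.0, 0.0)
print(unpremultiply((0.5, 0.0, 0.0), 0.5))  # (1.0, 0.0, 0.0)
```

That alpha-zero case is why round-tripping through a premultiplied format is lossy: any color hidden behind fully transparent pixels is multiplied to black and cannot be recovered.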
I'll try directly with layers tomorrow. I'm using 2.6v4 on Windows for this.
Well I can't get it to work. I'm not sure what step I am missing. Here is what I get when I follow JeruL01's instructions:

Layer Setup:
Composite Channel
Color Group Layer (CopyRGB)
Color Channel Layer
Alpha Group Layer (Copy) (Advanced Blend > Swizzle > Red=1, Green=1, Blue=1, Alpha=Red)
Alpha Channel Layer

Then doing:
Select Composite Channel > Export Flattened Channel
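For clarity on the swizzle in that alpha group (Red=1, Green=1, Blue=1, Alpha=Red): it remaps channels so RGB are forced to constant 1.0 and the layer's red channel is routed into alpha. A generic per-pixel sketch of what that remap does (hypothetical helper, not Mari code):

```python
def swizzle_alpha_from_red(r, g, b, a):
    """Mimic the Advanced Blend swizzle Red=1, Green=1, Blue=1, Alpha=Red.

    RGB are replaced with constant 1.0 and the incoming red channel
    becomes the alpha, turning a grayscale-in-red mask into a white
    layer whose alpha carries the mask.
    """
    return (1.0, 1.0, 1.0, r)

print(swizzle_alpha_from_red(0.25, 0.7, 0.3, 1.0))  # (1.0, 1.0, 1.0, 0.25)
```

In other words, the group converts a grayscale mask stored in the color channels into actual transparency, which is what lets the flattened export carry a usable alpha.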