Experience: 1-5 years of experience in a CA firm
Knowledge: filing GST and TDS returns, bookkeeping, and exposure to audits
Job Responsibilities
- Working with a wide range of media and using graphic design software.
- Thinking creatively and developing new design concepts, graphics, and layouts.
- Cultivate a solid body of work.
- Take the design “brief” to record requirements and clients' needs.
- Prepare rough drafts and present your ideas.
- Amend final designs in response to clients' comments and gain full approval.
- Work as part of a team with copywriters, designers, stylists, executives, etc.
- Create motion graphics and edit video.
Requirements
- Proven graphic design experience.
- Possession of creative flair, versatility, conceptual/visual ability and originality.
- Demonstrable graphic design skills with a strong portfolio.
- Ability to interact, communicate and present ideas.
- Up to date with industry-leading software and technologies.
- Highly proficient in all design aspects.
- Professionalism regarding time, costs, and deadlines.
o You’re both relentless and kind, and don’t see these as being mutually
exclusive
o You have a self-directed learning style, an insatiable curiosity, and a
hands-on execution mindset
o You have deep experience working with product and engineering teams
to launch machine learning products that users love in new or rapidly
evolving markets
o You flourish in uncertain environments and can turn incomplete,
conflicting, or ambiguous inputs into solid data-science action plans
o You bring best practices to feature engineering, model development, and
ML operations
o Your experience in deploying and monitoring the performance of models
in production enables us to implement a best-in-class solution
o You have exceptional writing and speaking skills with a talent for
articulating how data science can be applied to solve customer problems
Must-Have Qualifications
o Graduate degree in engineering, data science, mathematics, physics, or
another quantitative field
o 5+ years of hands-on experience in building and deploying production-
grade ML models with ML frameworks (TensorFlow, Keras, PyTorch) and
libraries like scikit-learn
o Track record of building ML pipelines for time series, classification, and predictive applications
o Expert level skills in Python for data analysis and visualization, hypothesis
testing, and model building
o Deep experience with ensemble ML approaches including random forests
and xgboost, and experience with databases and querying models for
structured and unstructured data
o A knack for using data visualization and analysis tools to tell a story
o You naturally think quantitatively about problems and work backward
from a customer outcome
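The ensemble approaches named above (random forests, xgboost) all rest on one idea: aggregate many weak predictors into a stronger one. A minimal, library-free sketch of that idea, using hypothetical threshold "stumps" rather than real trained models, looks like this:

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Aggregate predictions from several models by majority vote --
    the core aggregation idea behind bagging ensembles like random forests."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three toy "models": simple threshold rules on a single feature.
stumps = [
    lambda x: 1 if x > 0.3 else 0,
    lambda x: 1 if x > 0.5 else 0,
    lambda x: 1 if x > 0.7 else 0,
]

print(majority_vote(stumps, 0.6))  # two of three stumps vote 1, so prints 1
```

In practice the role would use scikit-learn's `RandomForestClassifier` or xgboost rather than anything hand-rolled; the sketch only illustrates the voting principle.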
What’ll make you stand out (but not required)
o You have a keen awareness or interest in network analysis/graph analysis
or NLP
o You have experience in distributed systems and graph databases
o You have a strong connection to finance teams or closely related
domains, the challenges they face, and a deep appreciation for their
aspirations
Roles and Responsibilities:
- Design, develop, and maintain the end-to-end MLOps infrastructure from the ground up, leveraging open-source systems across the entire MLOps landscape.
- Create pipelines for data ingestion and transformation; build, test, and deploy machine learning models; and monitor and maintain the performance of these models in production.
- Manage the MLOps stack, including version control systems, continuous integration and deployment tools, containerization, orchestration, and monitoring systems.
- Ensure that the MLOps stack is scalable, reliable, and secure.
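The pipeline stages listed above (ingest, transform, train, evaluate, deploy, monitor) are typically composed as a chain of discrete steps. The outline below is an illustrative pure-Python sketch with placeholder data and a trivial threshold "model" — not a reference to any specific MLOps system such as Kubeflow or MLflow:

```python
def ingest():
    """Placeholder: pull raw records from a source system."""
    return [{"feature": i, "label": i % 2} for i in range(10)]

def transform(records):
    """Placeholder: normalize raw records into (feature, label) pairs."""
    return [(r["feature"] / 10.0, r["label"]) for r in records]

def train(dataset):
    """Placeholder: 'train' a trivial mean-threshold model."""
    threshold = sum(x for x, _ in dataset) / len(dataset)
    return lambda x: 1 if x > threshold else 0

def evaluate(model, dataset):
    """Placeholder: accuracy of the model on a dataset."""
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

def run_pipeline():
    """Chain the stages: ingest -> transform -> train -> evaluate."""
    data = transform(ingest())
    model = train(data)
    return evaluate(model, data)

print(f"accuracy: {run_pipeline():.2f}")
```

Real MLOps systems add what this sketch omits: versioned artifacts between stages, scheduled orchestration, and production monitoring of the deployed model.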
Skills Required:
- 3-6 years of MLOps experience
- Preferably worked in the startup ecosystem
Primary Skills:
- Experience with end-to-end (E2E) MLOps systems such as ClearML, Kubeflow, and MLflow
- Technical expertise in MLOps: Should have a deep understanding of the MLOps landscape and be able to leverage open-source systems to build scalable, reliable, and secure MLOps infrastructure.
- Programming skills: proficient in at least one programming language, such as Python, with experience in data science libraries such as TensorFlow, PyTorch, or scikit-learn.
- DevOps experience: Should have experience with DevOps tools and practices, such as Git, Docker, Kubernetes, and Jenkins.
Secondary Skills:
- Version Control Systems (VCS) tools like Git and Subversion
- Containerization technologies like Docker and Kubernetes
- Cloud Platforms like AWS, Azure, and Google Cloud Platform
- Data Preparation and Management tools like Apache Spark, Apache Hadoop, and SQL databases like PostgreSQL and MySQL
- Machine Learning Frameworks like TensorFlow, PyTorch, and Scikit-learn
- Monitoring and Logging tools like Prometheus, Grafana, and Elasticsearch
- Continuous Integration and Continuous Deployment (CI/CD) tools like Jenkins, GitLab CI, and CircleCI
- Explainability and interpretability tools like LIME and SHAP
Job Responsibilities
- A strong understanding of, or work experience with, all phases of the software development life cycle, including analysis, design, implementation, testing, and support.
- Deep experience with troubleshooting, HTTP traffic inspection, authentication methodologies, session management and browser automation.
- Experience with unit testing frameworks and TDD.
- Familiarity with architecture styles/APIs (REST, RPC).
- Able to develop large scale web/database applications.
- Capable of working on, maintaining, and supporting multiple projects while meeting multiple deadlines.
- Ability to innovate and provide functional applications with intuitive interfaces.
- Capable of working in a team environment and individually on projects.
- Able to work with the Project Manager to determine needs and to apply or customize existing technology to meet those needs.
- Ability to handle clients, understand client requirements, and code and design on the .NET platform (C#, ASP.NET). Hands-on experience with OOP concepts and design patterns.
- Ability to produce user guides, project documentation, project progress reports, and other documentation.
Technical Skills required:
.NET with C# (.NET Core), AJAX, MVC, LINQ, Entity Framework, ADO.NET, JavaScript, jQuery, SQL Server, Angular (advanced), socket programming.
- Guiding team member for handling technical challenges
- Conducting training sessions
- Handling user issues and providing corrective solution
- Fixing vulnerabilities and upgrading to new stable versions
- Sustenance and maintenance of the Archer tool
- Good scripting knowledge
Desired Candidate Profile:
- Certified RSA Archer Professional, Internal Audit & Controls, Risk (Threat), ISO 27001
- Minimum of 5 years’ experience in the respective field
- Experience of managing a GRC Team
- Strong experience with the implementation, commissioning, and enhancement of GRC product modules
- Strong understanding of process workflows and of identifying manual workflows
- Expertise in configuring GRC tool (Archer)
- Experience with all SDLC activities related to GRC program implementation
Good to have: Oracle, WAS/Tomcat server knowledge, and basic knowledge of shell scripting.
- Owns the end-to-end implementation of the assigned data processing components/product features, i.e. design, development, deployment, and testing of the data processing components and associated flows, conforming to best coding practices
- Creation and optimization of data engineering pipelines for analytics projects
- Support data and cloud transformation initiatives
- Contribute to our cloud strategy based on prior experience
- Independently work with all stakeholders across the organization to deliver enhanced functionalities
- Create and maintain automated ETL processes with a special focus on data flow, error recovery, and exception handling and reporting
- Gather and understand data requirements, work in the team to achieve high-quality data ingestion, and build systems that can process and transform the data
- Be able to comprehend the application of database indexes and transactions
- Involve in the design and development of a Big Data predictive analytics SaaS-based customer data platform using object-oriented analysis, design, and programming skills, and design patterns
- Implement ETL workflows for data matching, data cleansing, data integration, and management
- Maintain existing data pipelines, and develop new data pipelines using big data technologies
- Responsible for leading the effort of continuously improving reliability, scalability, and stability of microservices and platform
Responsibilities will include:
• Working on the full stack, from UI elements to database performance.
• Building new features from start to finish and back end to front end, including architecture design, coding, testing, and supporting in production.
• Tackling a wide variety of technical issues throughout our stack and contributing to all parts of the code base.
Key things we'll be looking for:
• Functional, real world programming experience (vs. theoretical).
• 4 to 6 years' experience in Python and Django.
• The ability to hold your own on the front end and the back end.
• SQL database experience (PostgreSQL, MySQL, SQLite).
• Experience with deployments to Heroku, AWS, or similar.
• A track record of balancing fast-and-dirty and long-term code maintainability.
• Experience in integrating 3rd party APIs and payment gateways.
• Experience working on a fintech or payments technology product.
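Integrating third-party APIs and payment gateways, as listed above, hinges in practice on tolerating transient failures. A minimal retry-with-backoff wrapper in plain Python (the simulated gateway is hypothetical, not a real payment API) illustrates the usual defensive pattern:

```python
import time

def with_retries(call, attempts=3, base_delay=0.01):
    """Invoke `call`, retrying on exception with exponential backoff --
    a common defensive wrapper around flaky third-party endpoints."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulated gateway that fails twice before succeeding.
state = {"calls": 0}
def charge():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("gateway timeout")
    return "charged"

print(with_retries(charge))  # succeeds on the third attempt
```

Real integrations would retry only idempotent or explicitly retry-safe operations and would distinguish transient errors (timeouts) from permanent ones (declined cards).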
Extra credit if you have:
• Experience with Postgres database administration.
• Shipped code that has supported tens of thousands of users, including enterprise/business customers.
• Experience in operations and how it pertains to site reliability/speed.
• Familiarity with networking, caching, database, and how to triage issues.
• A good eye for intuitive, user-friendly front end design.
Skills required:
- Knowledge of Linux administration and Bash
- Strong overall coding experience and knowledge of algorithms
- Experience with Git
- Experience with Docker
Typical tasks would consist of writing scripts for:
- server automation
- container deployment
- user data migration
- backup management